on May 17, 2013 by Phillip Lord in Uncategorized

Linked Research

Controlling information is just an idea, after all, not to be mistaken for something new. An idea that sits like a splinter in the minds of some, driving them mad. The meme evidently took shape long ago and continues to evolve.

The forces that wish to make a change, as well as those that resist it, stand before us. Thus, as the position of a polemic goes, I must explain to you how this idea of denouncing change and praising obedience was born, give you a complete account of the system, and expound the bullshit, the master-builder of human laziness.

This is not an attempt to convince you with the same arguments that you have heard countless times about the wonderful world of free and open access to knowledge. It is neither a roadmap nor a proposal on how to get there by ticking all the latest check-marks. Such endeavours, while fruitful, remain bound to the endless struggle over control and power.

The heart of our challenge is neither technical nor social. We tend to get sucked into the uncertainties and the contravening evidence on practically any given argument. The details, so to speak, slow down the rate of progress. But we do it anyway.

To take a trivial example: which of us ever undertakes the laborious exercise of acquiring and sharing research knowledge, except to obtain some pluses in one form or another, all in the name of making our contributions to humanity? But who has the right to criticize a man who chooses to enjoy the pleasures of a mental masturbation that has no annoying consequences, or one who avoids voicing himself because it produces no resultant gain?

Do we then have the right to criticize those who stand in the way of others? What about those who wish to silence others because of the potential discomfort it may bring them in the future?

I feel I owe you an apology; my primary agenda is to ask you to consider making a change, in whatever way you see fit, on the matter of publishing and sharing knowledge, as opposed to sitting on the sidelines, watching things unfold, and accepting whatever is handed to you. Your voice matters.

This brings me to echo an idea that's similar to, if not the same as, what you are already familiar with. The idea is for Web researchers to eat their own dog food when it comes to sharing their knowledge. To use the technologies and tools that are native to the Web. To continue to push the boundaries of how we acquire and disseminate knowledge using the Web stack. And to be able to tap into each other's knowledge by accessing, discovering, remixing, and sharing in a ubiquitous manner. Some might say this is employing public funding to its fullest potential. Others would say that they are already doing that.


A proposal to shift towards Linked Research:

Start with HTML and enrich it semantically; encapsulate everything from metadata, references, claims, and conclusions to the steps needed to reproduce the research itself. Present it with CSS that is helpful to your readers. Use JavaScript to offer interactivity, to better communicate your findings, and to experiment further. Use other technologies from the same bag labeled "Web friendly" as you see appropriate. Publish at a URL so that both humans and machines can access and discover it. Offer a comment system so that your colleagues can review or give feedback to improve your work, and have it openly accessible right away. Publicly announce your research so that you get the karma points you seek, all under your own authority, and dominate the search-engine results, without having to ask anyone for permission.
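
As a rough illustration, a minimal sketch of what such a document could look like, using RDFa Lite attributes with the schema.org vocabulary; the file names, the author URI, and the vocabulary choice here are placeholders, and any Web-friendly vocabulary would serve:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8" />
        <title>An Example Research Article</title>
        <!-- placeholder stylesheet: reader-friendly presentation -->
        <link rel="stylesheet" href="article.css" />
      </head>
      <body vocab="http://schema.org/" typeof="ScholarlyArticle">
        <article>
          <h1 property="name">An Example Research Article</h1>
          <p>
            <span property="author" typeof="Person">
              <a property="url" href="http://example.org/#i"><span property="name">A. Researcher</span></a>
            </span>,
            <time property="datePublished" datetime="2013-05-17">2013-05-17</time>
          </p>
          <section id="method">
            <h2>Method</h2>
            <p>Claims, conclusions, and the steps needed to reproduce the
               research go here; each section gets its own URL fragment
               (this one is #method) so that humans and machines can
               reference it directly.</p>
          </section>
        </article>
        <!-- placeholder script: optional interactivity -->
        <script src="interactions.js"></script>
      </body>
    </html>

Because this is plain HTML at a stable URL, the same file serves the reader, the crawler, and the reviewer.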

Or, we can simply continue with the existing practices. Be at the mercy of "people in power" or the needs of businesses, because we are too comfortable with a pipeline constructed by those in charge. Rated with TimBL's Linked Open Data stars, that gets us 3 stars tops on a sunny day; the remaining stars require denoting things with URIs and linking out to other data. Who cares about the stars, right? The same star system that we try to slap government stakeholders and all the other data-huggers with. Hypocrisy, or arrogance about knowing how to use our own technology? Perhaps it is just laziness or obedience?

At this point you are either drinking my Kool-Aid, or you think I'm an ignorant fool. Either way, are we going to bark at each other all day, or are we going to do something for a change?

Sarven Capadisli

2 Comments

  1. Bruce R. Barkstrom

    May 23, 2013 @ 11:58 pm

    ———- Forwarded message ———-
    From: Bruce Barkstrom
    Date: Thu, May 23, 2013 at 5:14 PM
    Subject: Re: [FoRCnet.org] Re: Make research available in Web friendly formats
    To: forcnet@googlegroups.com

    Thank you and Susan for the clarification about “Web-friendly” formats.

    I also understand the difference between the Web stack and the desktop environment.
    In the atmospheric and oceanic part of the Earth sciences, there are roots that go back
    into the more distant past with mainframes and supercomputers – where FORTRAN is
    still used.

    I think I should mention two formats that some parts of the Earth science community use:

    1. ASCII – used in the NOAA Global Historical Climate Network (GHCN) and IGRA
    data sets. GHCN contains records of temperature and humidity collected at ground
    stations over about the last 150 years. These data are fundamental to much of the work on
    temperature increases associated with climate change. IGRA contains records of
    temperature, humidity, and barometric pressure measured from instruments carried
    upward on balloons launched from the Earth’s surface until the balloons burst in the vicinity
    of the stratosphere. The IGRA type of data have been collected by international
    agreement since about 1950.

    2. Hierarchical Data Format (HDF) – used for many collections of satellite data,
    particularly in the atmospheric and oceanic communities. HDF is highly sophisticated
    data formatting software that is quite efficient for numerical data. It can handle
    various kinds of arrays and includes an internal file “directory” that can contain
    a great deal of “metadata”. The software has C and FORTRAN interfaces.
    It’s also used as some of the format standardization in the NCAR project (UNIDATA)
    to produce an XML vocabulary for dealing with spatio-temporal sampling patterns:
    the Climate and Forecasting Profile (CF), which is usually identified with NetCDF.

    There is also a fair amount of XML standardization clustered around metadata,
    particularly from the geospatial community. That community has strong roots
    in the surveying and map-making communities and is part of the tool set associated
    with Geographic Information Systems (GIS). Not all Earth science data dealing
    with physical fields in the atmosphere and oceans fit well with horizontal areas
    and geolocation. For example, the “solar constant” (or, perhaps more precisely,
    “solar irradiance”) is located at a standard distance from the Sun’s center and
    is assumed to be the same for all longitudes and latitudes – just a time series.

    Hope the specificity helps.

    Bruce B.

  2. Bruce R. Barkstrom

    May 24, 2013 @ 12:00 am

    This material followed some additional looking at the message below.
    On Thu, May 23, 2013 at 4:02 PM, Sarven Capadisli wrote:

    Hi all,

    I’d like to invite you to read an article I wrote on Linked Research [1].

    Feedback is most welcome either here or on the site. Actions are preferable =)

    [1] http://csarven.ca/linked-research

    -Sarven
    http://csarven.ca/#i

    ———- Forwarded message ———-
    From: Bruce Barkstrom
    Date: Thu, May 23, 2013 at 5:37 PM
    Subject: Re: [FoRCnet.org] Linked Research
    To: forcnet@googlegroups.com

    It might be helpful to take into account a standard quantification of the
    way communities adopt new ideas: innovation-diffusion theory. It was
    based on work by some sociologists back in the 1940s regarding the
    adoption of hybrid corn in the midwest, and a couple of other changes
    of a similar nature. The theory assumes that any community has a subcommunity
    of Innovators and a larger community of Imitators. There’s a simple set of
    equations you can put into a spreadsheet – and there are just three coefficients.
    This has been a popular model for marketing departments – and it works
    rather like a community getting “infected with an idea”.

    I first ran into it in an excellent book: Thomas, R. J., 1993: New Product
    Development: Managing and Forecasting for Strategic Success, Wiley,
    New York, NY, where the equation appears on p. 190. There has been
    a lot of empirical work from the advertising and marketing folks. [If you go
    to the equations, the numerical values for the two key coefficients are
    Ino = 0.03 and Imi = 0.38 – although you might want to check the web
    for new information on the equations and coefficients. There’s some
    opinion that these numerical values don’t reflect the rate of communications
    with social media. Changing the coefficients isn't hard – getting the numerical
    values takes a good deal more work.]
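
    As a rough sketch, the spreadsheet version of those equations can be
    written out in JavaScript; this assumes the standard two-coefficient
    (Bass) form with the values quoted above, and the variable names are
    placeholders rather than the book's notation:

    // Bass diffusion sketch: fraction of a community adopting over time.
    // p ("Ino") models innovators acting on their own; q ("Imi") models
    // imitators who adopt in proportion to those who already have.
    var p = 0.03;   // innovation coefficient, as quoted above
    var q = 0.38;   // imitation coefficient, as quoted above

    var adopted = 0;  // fraction of the community that has adopted so far
    for (var year = 1; adopted < 0.5; year++) {
      // this step's adopters: independent innovators plus imitators,
      // drawn from the fraction that has not yet adopted
      adopted += (p + q * adopted) * (1 - adopted);
      console.log('year ' + year + ': ' + (100 * adopted).toFixed(1) + '% adopted');
    }
    // With these coefficients and yearly steps, the printed values trace
    // an S-shaped curve that crosses 50% adoption around year eight.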

    At any rate, playing around with the equations gives the usual S-shaped
    curve for the fraction of a community adopting a new idea as a function
    of time. The results suggest that it takes about a decade to get to 50%
    adoption and longer to reach market saturation. The Semantic Web is
    a bit new in those terms, so it’s probably not useful to expend a lot of
    emotional energy on the lack of commitment by everybody.

    Bruce B.
