Showing posts with label prior art.

30 November 2010

Why I'm Rooting for Microsoft

It will not have escaped your notice that the patent system has been the subject of several posts on this blog, or that the general tenor is pretty simple: it's broken, and nowhere more evidently so than for software. Anyone can see that, but what is much harder is seeing how to fix it given the huge vested interests at work here.

On Open Enterprise blog.

11 February 2009

India Fights Patents with Huge Prior Art Database

One of the many problems with the patent offices around the world is that they are often unaware of prior art, granting patents for so-called inventions that are, in fact, common knowledge. In the computer world, there have been a number of efforts to provide prior art to patent offices, either after a patent is granted, in order to have it rescinded, or – even better – as part of the examination process. That's fine for a community with easy access to online source materials, but what about other fields, where prior art exists in other forms like books, or perhaps orally?

This is a particularly thorny problem for the sphere of traditional medicine. Substances derived from plants, for example, may have been in use for literally thousands of years, and yet patents may still be granted – especially in Western countries ignorant of other ancient medical traditions.

Perhaps the best-known example of this is the case of turmeric, commonly used as a spice in curries, for which a patent was granted by the US Patent Office in 1995 on its wound-healing properties, even though these supposedly novel uses had in fact been known for millennia.

To combat this problem, and to prevent its huge traditional knowledge base being exploited in this way, India has created the Traditional Knowledge Digital Library (TKDL) database, which was unveiled on 2 February and is now available to patent examiners at the European Patent Office for establishing prior art in cases of patent applications based on Indian systems of medicine.

Here's some background information on how the database came to be created and was set up:


The genesis of this maiden Indian effort dates back to the year 2000, when an interdisciplinary task force of experts was set up by AYUSH and CSIR to devise a mechanism for the protection of India’s traditional knowledge. The TKDL expert group estimated that about 2,000 wrong patents concerning Indian systems of medicine were being granted every year at the international level, mainly because India’s traditional medicine knowledge exists in languages such as Sanskrit, Hindi, Arabic, Urdu and Tamil, and was neither accessible to nor understood by patent examiners at international patent offices due to language and format barriers.

The TKDL breaks these barriers: with the help of information technology tools and a novel classification system - the Traditional Knowledge Resource Classification (TKRC) - it has been able scientifically to convert and structure the information available in open-domain textbooks in languages like Hindi, Sanskrit, Arabic, Persian, Urdu and Tamil into five international languages, namely English, Japanese, French, German and Spanish, amounting to some 30 million A4-size pages of information.

This is a huge, multilingual resource – something that could only be put together with governmental support and resources. It is also fairly specific to the domain of traditional knowledge. Nonetheless, it's a great example of how an extensive prior art database can be created and then made readily available to the patent authorities in order to help prevent patents being granted unjustifiably. It's a pity that we are unlikely to see anything quite like it for other knowledge domains.

02 November 2007

Desperately Seeking Pamela

Groklaw's Pamela Jones is a true eminence grise of the area in the intellectual Venn diagram where computer technology and law intersect. And yet, as befits her eminent greyness, she's a shadowy figure - some have even gone so far as to claim that she does not exist.

Against that background, this interview is all the more welcome, not least because it contains insights from PJ such as the following:

What is so unique about IP and FOSS is that computers are a relatively recent thing. So is FOSS. So there are people still alive who remember very well the early days, the beginnings. That has implications for prior art searching, for example. It had implications in the SCO litigation, because when SCO made broad claims in the media, there were people saying, "That's not so. I was there. It was like this..."

Oh yeah: now, why didn't I think of that?

23 January 2007

Microsoft's Eternal Cheek

This is rich:

In this culture of instant information, some Microsoft Corp. researchers are pursuing a radical notion -- the concept of saving messages for delivery in decades, centuries or more.

The project, dubbed "immortal computing," would let people store digital information in physical artifacts and other forms to be preserved and revealed to future generations, and maybe even to future civilizations.

So, the company that more than anyone has tried to lock people into opaque, closed formats that will be unreadable in a few decades, let alone a few millennia, and which even now is trying to foist more of the same on people, suddenly discovers the virtue of unconstrained accessibility.

But to add insult to injury, it then tries to patent the idea. Earth to Microsoft: this is called openness; it's what you've been fighting for the last thirty years. There's a fair amount of prior art for the basic technique, actually.

18 September 2006

Not So Patent

Squirreling away prior art in an attempt to stave off software patents sounds like a jolly sensible idea. But that old curmudgeon, Richard Stallman, points out some very cogent reasons why in fact this isn't such a jolly sensible idea. Essentially, the only solution to software patents is to abolish them.

04 April 2006

Coughing Genomic Ink

One of the favourite games of scholars working on ancient texts that have come down to us from multiple sources is to create a family tree of manuscripts. The trick is to look for groups of textual divergences - a word added here, a mis-spelling there - to spot the gradual accretions, deletions and errors wrought by incompetent, distracted or bored copyists. Once the tree has been established, it is possible to guess what the original, founding text might have looked like.

You might think that this sort of thing is on the way out; on the contrary, though, it's an extremely important technique in bioinformatics - hardly a dusty old discipline. The idea is to treat genomes deriving from a common ancestor as a kind of manuscript, written using just the four letters - A, C, G and T - found in DNA.

Then, by comparing the commonalities and divergences, it is possible to work out which manuscripts/genomes came from a common intermediary, and hence to build a family tree. As with manuscripts, it is then possible to hazard a guess at what the original text - the ancestral genome - might have looked like.
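To make the analogy a little more concrete, here is a toy sketch in Python - very much a back-of-the-envelope illustration rather than anything the researchers actually use, with made-up sequences and names. Given a handful of short, already-aligned sequences, it measures how much each pair diverges, picks out the closest "relatives", and guesses an ancestral text by majority vote at each position:

# Toy illustration only: pairwise divergence, closest relatives,
# and a majority-vote guess at the ancestral sequence.
from itertools import combinations
from collections import Counter

def hamming(a, b):
    # Number of positions at which two equal-length sequences differ.
    return sum(x != y for x, y in zip(a, b))

def closest_pair(seqs):
    # The pair of labels whose sequences diverge the least.
    return min(combinations(seqs, 2),
               key=lambda pair: hamming(seqs[pair[0]], seqs[pair[1]]))

def consensus(seqs):
    # Majority-vote "ancestor", column by column.
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))

manuscripts = {
    "human": "ACGTACGTAC",
    "chimp": "ACGTACGAAC",
    "mouse": "ACCTACGTTC",
    "rat":   "ACCTACGTTA",
}

print("Closest relatives:", closest_pair(manuscripts))
print("Guessed ancestor: ", consensus(manuscripts.values()))

Real phylogenetics uses far more sophisticated statistical models, of course, but the underlying logic - shared readings imply shared ancestry - is exactly the one the manuscript scholars use.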

That, broadly, is the idea behind some research that David Haussler at the University of California at Santa Cruz is undertaking, and which is reported on in this month's Wired magazine (freely available thanks to the magazine's enlightened approach to publishing).

As I described in Digital Code of Life, Haussler played an important role in the closing years of the Human Genome Project:

Haussler set to work creating a program to sort through and assemble the 400,000 sequences grouped into 30,000 BACs [large-scale fragments of DNA] that had been produced by the laboratories of the Human Genome Project. But in May 2000, when one of his graduate students, Jim Kent, inquired how the programming was going, Haussler had to admit it was not going well. Kent had been a professional programmer before turning to research. His experience in writing code against deadlines, coupled with a strongly-held belief that the human genome should be freely available, led him to volunteer to create the assembly program in short order.

Kent later explained why he took on the task:

There was not a heck of a lot that the Human Genome Project could say about the genome that was more informative than 'it's got a lot of As, Cs, Gs and Ts' without an assembly. We were afraid that if we couldn't say anything informative, and thereby demonstrate 'prior art', much of the human genome would end up tied up in patents.

Using 100 Pentium machines running at 800 MHz - powerful hardware in the year 2000 - under GNU/Linux, Kent was able to lash up a program, assemble the fragments and save the human genome for mankind.
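For a flavour of what "assembling the fragments" means, here is a toy greedy overlap-and-merge sketch in Python. To be clear, this is an illustration of the general principle only, not Kent's assembler, which had to cope with sequencing errors, repeats and some 400,000 fragments; the reads here are invented for the example.

# Toy sketch: repeatedly find the pair of fragments with the longest
# suffix/prefix overlap and merge them, until one sequence remains.

def overlap(a, b):
    # Length of the longest suffix of a that is also a prefix of b.
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        # Pick the ordered pair with the best overlap and merge it.
        n, a, b = max(((overlap(a, b), a, b)
                       for a in frags for b in frags if a != b),
                      key=lambda t: t[0])
        frags.remove(a)
        frags.remove(b)
        frags.append(a + b[n:])
    return frags[0]

reads = ["GTACGT", "ACGTAC", "TACGGA"]
print(assemble(reads))  # stitches the reads into one plausible sequence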

Haussler's current research depends not just on the availability of the human genome, but also on all the other genomes that have been sequenced - the different manuscripts written in DNA that have come down to us. Using bioinformatics and even more powerful hardware than that available to Kent back in 2000, it is possible to compare and contrast these genomes, looking for tell-tale signs of common ancestors.

But the result is no mere dry academic exercise: if things go well, the DNA text that will drop out at the end will be nothing less than the genome of one of our ancient forebears. Even if Wired's breathless speculations about recreating live animals from the sequence seem rather wide of the mark - imagine trying to run a computer program recreated in a similar way - the genome on its own will be treasure enough. Certainly not bad work for those scholars who "cough in ink" in the world of open genomics.