“A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.” – G.H. Hardy
What if the Industrial Revolution never ended? What if Da Vinci was not merely a creative genius but had many teachers, thanks to the influx of Greek scholars after the fall of the Byzantine Empire? What if the future is just a stepwise evolution from the nearest neighbor in systematic composition, directed by correspondence, one recombination after another, pushing along what has already happened while shaped by the pull of the potential? What if that means the near future is highly predictable, and we are living in an era witnessing the emergence of ways of making that continue to shorten the path between idea and actualization?
After several millennia of progress, from the first brick to reusing the same glyphs to simplify writing, to water wheels and lenses, assembly lines, robotic arms and general-purpose computing machines, we are about to enter a phase in our evolution where collectives of machines can reproduce just about everything, the way a CD player can reproduce all sorts of music, an LCD screen can show anything from drawings to movies, and a printer can reproduce any book in the world, written in any language at any time in history. The concept of ‘printing’ holds within itself a maximal potentiality for manufacturing processes: in theory at least, one can print out the whole range of documented human thought. Likewise, a nano-circuit printer can spit out a design from any mobile phone company, irrespective of the manufacturer of the printer itself. Similar to how computer software works, or how movable type made the Gutenberg printing press so flexible, a printer allows for a local maximum of interchangeability, while increased competition will ensure the commoditization of its many parts.
Nowadays we have tools making tools, some even capable of reproducing themselves. Several technical domains are forecast to advance to the level of an ‘information science’ during the following two decades. This is the stage where the technological application and technical means reach a ‘general purpose’ state, and the end result is determined by a sort of software, a set of instructions on how this general-purpose tool is to behave, such as lighting up a number of pixels on a screen in a variety of colours so that this text appears in this particular location. The currently identified domains are biotechnology, nanotechnology, robotics, information and communication technology, and the cognitive sciences. Not only are these areas evolving at an accelerating pace as information technology “feeds back on itself”, their knowledge is also highly exchangeable, causing a high degree of cross-fertilization amongst these domains and further accelerating their evolution. Artificial Intelligence, for example, is a bio-info-cogno combination, and if we mix in programmable matter, nano and robo join the mix as well.
This is an appropriate point to introduce an intriguing idea concerning “novelty density”: the technological singularity. If we consider the gradual spread and development of ideas since recorded history, we are nearing a time where ideas combine and recombine at such high speed that their applications will no longer have a definitive form but exist in a state of continuous renewal. This is very obvious in the case of personalization, where clothes can be made for an exact fit, a medical treatment is composed specifically for a person’s condition and habits, or a headset is etched and embedded into someone’s favorite glasses. None of these end products will be the same, at least not intentionally.
Tracing back their origins, science and technology used to be two very distinct disciplines. Even the types of people involved are quite distinct: engineers tend to be the pragmatic, hands-on type, while scientists are a bit more absorbed in figuring out a tool, machine or procedure. As the terms already indicate, an engineer tends to work more with the hands, while a scientist works more with the head. Different tools, different results. Once in a while in history these disciplines come together, combine and quicken the pace of evolution, such as the mixing of geometry with construction that led to a jump in ways of building and machine making, using mathematical formulas not only as a descriptive framework but on many occasions also as a prescriptive one. Within the context of information technology the latter is a programming language, and in essence you can regard chemical notation as a programming language too, or quantum field theory, or optics, or thermodynamics, or mechanics, or musical notation. You get something with the expressive power of a language, with a functional grammar and syntax, and a sort of alphabet.
During the singularity, the combinatorial explosion of both intra-domain and cross-domain advances becomes so fast that we cannot give it a meaningful measure anymore. Ideas, procedures, methods and programs can jump from one language to another, and if they land in bordering fields they can provide new functionality which in turn makes the previously impossible possible. For example, carbon nanotubes can already be fabricated with a length of one meter, even though the individual tubes are so thin they are invisible to the human eye. Once this fabrication process has been improved to the point that it is economically viable, mixing it with weaving techniques will allow for rope, cables, duct tape, wallpaper, concrete bricks, rubber or asphalt. It is hard to imagine what it means to have so many materials with the strength of diamond, and what the impact will be of a washing machine or a pair of worker jeans that doesn’t break anymore, or glasses that don’t scratch. Surely it will mean the end of the ‘throw-away’ culture of mass-produced goods that are designed to break. ‘Planned obsolescence’, as it was introduced in the late forties, will need to give way to other market forces, like fashion trends. Besides articles that are simply rare or inimitable, what does it mean for other forms of artificial exclusivity, most importantly the protection of intellectual property via copyright and patents? What will the impact be when the chemical signature of the most valuable rare elements on the periodic table can be simulated with a combination of cheap alternatives? What happens when there are so many discoveries that duplicate patents become the norm? How can the language used in a patent ensure its uniqueness? And even if it is unique enough, can simple variations produce the same outcome without being covered by the patent? To what extent will similarities amongst different patents be considered equivalent in favor of the original patent? And is that fair?
Some companies have set up their patents in such a way that, upon discovery of a possible new area of application, the patent is automatically deconstructed into its elementary building blocks and their join points, and a computer system starts generating variations on the core patent. This can be twenty variations, two hundred or two thousand. In this way such a company tries to patent a possible industry. It also makes it unclear to the competition what has actually been discovered, so that they cannot simply reverse-engineer the same kind of solution. But patents are also meant as a show of muscle, a signpost that this company might be willing to defend its patent in a court of law, but more importantly that it is eager to strike commercial deals with other companies concerning the reuse of the patent, or even the products described, and thereby grow its business. In the software industry in particular it is not really worthwhile to defend patents in court, unless you are an illusorily exclusive company named after a well-known fruit, eager to bully new entrants off your perceived turf by threatening them with expensive court cases that would bankrupt any start-up.
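The core of such a variation generator is nothing exotic: once a patent is decomposed into building blocks with interchangeable alternatives at each join point, enumerating the combinations yields the whole family of filings. A toy sketch, with every block and alternative invented purely for illustration:

```python
from itertools import product

# Hypothetical "core patent" reduced to elementary building blocks,
# each with a set of interchangeable alternatives at its join point.
core_patent = {
    "energy_source": ["battery", "solar cell", "fuel cell"],
    "actuator":      ["servo motor", "piezo element"],
    "controller":    ["microcontroller", "FPGA"],
}

def generate_variations(blocks):
    """Yield every combination of one alternative per building block."""
    names = list(blocks)
    for combo in product(*(blocks[n] for n in names)):
        yield dict(zip(names, combo))

variations = list(generate_variations(core_patent))
print(len(variations))  # 3 * 2 * 2 = 12 variations on the core patent
```

Two or three extra join points with a handful of alternatives each already pushes the count into the hundreds, which is exactly the “twenty, two hundred or two thousand” effect described above.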
That may sound reasonable when a company has spent years figuring out how something actually works, such as a medicinal treatment, and is rewarded for it with the imposed exclusivity of a patent, but there are many biotech companies that have simply patented or copyrighted the gene sequence of a common disease, or of its healthy form, and charge an arbitrarily large sum of money for sharing that information. Mapping the human genome was a collective effort that took quite some years, but with every increase in knowledge the technologies improved, and it has essentially become an information science. DNA sequencer machines are simply fed the information of the gene sequence, and an exact copy of a disease or the cells of an organ can be reproduced. Even the commercially oriented US patent system says that a patent refers to “the right granted to anyone who invents any new, useful, and non-obvious process, machine, article of manufacture, or composition of matter”. Maybe thirty years ago a gene sequence was non-obvious, but with current technology it is primarily a matter of number-crunching an enormous database of research results and trying to find correlations that indicate a causal relation. Likewise, with general-purpose robotics and the advances in nano-science, meta-materials and programmable matter, can we still say that descriptive patents are non-obvious? Especially as information technology is reaching a stage where inventions can be automatically deconstructed the same way patents are, and new inventions can be grown using evolutionary algorithms which simulate the act of invention. Computing systems are already used to grow mathematical or chemical formulas, and by 2020 such automated discovery engines will be powerful enough to spit out potential leads on a daily basis.
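To make “growing a formula” concrete, here is a minimal sketch of the evolutionary idea: a (1+1) evolution strategy that mutates the coefficients of a candidate formula and keeps whichever variant fits a hidden target function better. The target, mutation size and iteration count are all invented for illustration; real discovery engines are vastly richer, but the keep-the-fitter-variant loop is the same in spirit.

```python
import random

random.seed(42)

def target(x):
    return 3 * x * x - 2 * x + 5   # the hidden "invention" to rediscover

def error(coeffs):
    """Sum of squared differences between a*x^2 + b*x + c and the target."""
    a, b, c = coeffs
    xs = [i / 10 for i in range(-20, 21)]
    return sum((a * x * x + b * x + c - target(x)) ** 2 for x in xs)

# (1+1) evolution strategy: mutate, keep the child only if it is fitter.
best = [0.0, 0.0, 0.0]
best_err = error(best)
for _ in range(5000):
    candidate = [v + random.gauss(0, 0.1) for v in best]
    cand_err = error(candidate)
    if cand_err < best_err:
        best, best_err = candidate, cand_err

print([round(v) for v in best])  # converges towards [3, -2, 5]
```

The loop never “understands” the formula; it only measures fit, which is precisely why such automatically grown results sit so uneasily next to a non-obviousness requirement.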
If the patent system is becoming increasingly inadequate, what else is there? Well, companies can be secretive about their R&D by simply not sharing it, or by sharing only the end results in a non-obvious way: scrambling, encrypting, obfuscating, cloaking or any other technique that hides the actual invention. Additionally, they can opt to protect their intellectual property by treating it as a trade secret, making sure the vital information remains confidential via non-competition and non-disclosure agreements; better still is to have some “secret formula” that is known only to a handful of people. Again referring to the above-mentioned company named after a well-known fruit, such secrecy can be applied throughout the company: to internal projects, to release schedules and marketing campaigns, and to any other kind of information about the products, so that one can tightly control the impression the product range makes. Combined with laws adopted during the Bush era, the new copyright laws have made it possible to control online media by framing unwanted news coverage as copyright infringement. As has become the norm, providers are all too aware of the costs of giving this the attention it deserves, and they simply remove such articles whenever complaints start spilling over. Be that as it may, such an environment and company culture is often not that welcoming or challenging for top talent, and as a result the R&D will steadily degenerate towards a second-rate copy shop. That may not even be bad for business: considering the advent of automated invention, a current head start can provide enough momentum, traction and path dependency to last up to 2020 for sure, maybe even 2025.
Still, full-frontal secrecy is only one sort of business model. Cooperation is another. Now, what if it were possible to take the patent system and the trade secret system and mix them? Extrapolating patents towards the near future, their descriptions will need to become specific enough to ensure uniqueness, and to meet that requirement the difference between a patented invention and the actual implementation is greatly reduced. Most efficiently, a patent would describe, in the appropriate “programming language”, how the invention is realized. On the other hand, trade secrecy can be ensured by an information exchange infrastructure of digital certificates, so that information can be shared securely. This infrastructure can be set up in ways that honor such secrecy, by avoiding any readable display or temporary storage, so that a business partner can use the ‘secret sauce’ on a pay-per-use basis. Here, again, we can draw upon and extend current systems of electronic data interchange.
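The pay-per-use arrangement can be sketched in a few lines: the owner runs the secret formula as a sealed service, and a partner receives only the output plus a cryptographically signed receipt for billing, never the formula itself. The formula, key and partner name below are placeholders, and a real deployment would use proper certificates rather than a single shared HMAC key; this is only the shape of the idea.

```python
import hmac, hashlib, json

def secret_formula(x):
    return 7 * x + 13                          # never leaves the owner's vault

SIGNING_KEY = b"owner-private-signing-key"     # placeholder for a real key/cert

def run_as_service(partner_id, x):
    """Apply the secret formula; return the result with a signed receipt."""
    result = secret_formula(x)
    receipt = json.dumps({"partner": partner_id, "input": x,
                          "output": result}, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, receipt.encode(), hashlib.sha256).hexdigest()
    return result, receipt, tag                # one billable receipt per use

def verify_receipt(receipt, tag):
    expected = hmac.new(SIGNING_KEY, receipt.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

result, receipt, tag = run_as_service("acme-robotics", 4)
print(result, verify_receipt(receipt, tag))    # 41 True
```

The partner can prove consumption and be billed per call, while reverse-engineering would require collecting enough input–output pairs to reconstruct the formula, which the owner can rate-limit or detect.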
There is a definite shift going on: from owning the actual production process and sharing the end result, to owning the rights and sharing the production process, to owning the “secret formula” and sharing the invention itself. What if someone comes up with a universal design exchange language, a sort of MIDI, which simply describes the input or output of a machine but doesn’t tell it how to do it? Enveloped in a machine-to-machine digital certificate infrastructure, such a system could be ready for when the factory hall turns into a general-purpose 3D-printing service station run by self-assembling software robots.
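What might a fragment of such a design exchange language look like? As a hedged sketch, with all field names invented: like a MIDI file, the message declares what a machine should accept and produce, and deliberately says nothing about how.

```python
import json

# Hypothetical design-exchange message: inputs and outputs only,
# no process steps. Any printer or mill that can satisfy the
# contract is free to realize it in its own way.
design = {
    "design_id": "bracket-v1",
    "inputs":  [{"material": "ABS", "unit": "gram", "quantity": 12}],
    "outputs": [{"part": "mounting bracket",
                 "tolerance_mm": 0.1,
                 "geometry_ref": "bracket-v1.mesh"}],
}

message = json.dumps(design, sort_keys=True)   # what travels between machines
received = json.loads(message)                 # machine-side parse
print(received["outputs"][0]["part"])          # mounting bracket
```

Because the message is pure description, it can be signed, certified and metered by the same certificate infrastructure sketched earlier, while each workshop keeps its own “how” as a trade secret.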
Patents are a legal affair dealt with in a completely outdated modality, relying on procedures and an ineffective, impotent and overly expensive system that is unsuitable for the demands of our future. If it is such a vital aspect of the free market, there are better ways: automating the patent system and providing an infrastructure that allows the sharing of its temporary copies. That way the return on a patent is based on actual consumption instead of perceived value, which can lie far apart. Possibly it has not been automated yet because it is an industry in its own right, but that will soon be over when waves of novelty reshape the legal landscape. As programming languages go, law itself seems ripe for an overhaul.