
Naavi.org

Building a Responsible Cyber Society…Since 1998

At present date, Quantum Computing stands towards traditional computing like a horse did towards the Wright Brothers’ plane. The horse was much faster, but the plane could move in a tridimensional space. And we all know how the horse and the plane evolved since then, now don’t we?

Geordie Rose, founder of D-Wave, 2015

To address this topic, and to place it in the context of its potential leverage for themes such as Artificial Intelligence, Secure Corporate Communications, competitive edge in the marketplace and others, it is mandatory to start by clearly defining WHAT computing is and WHERE Quantum Computing stands out.

So, Computing as we know it

A computer is a device that manipulates data by performing logical operations; computing is that precise "manipulation" which allows data to be combined and translated into added-value information.

The software is the set of instructions that conveys what needs to be done with the data, while the hardware is the set of electronic and mechanical components on which the data operations take place according to those instructions.

While the core of our universe is the "subatomic world", meaning the quantum particles that make up the atoms' basic components (Protons, Neutrons, and Electrons), the core of computing (as we humans have developed it) consists of two logical states, On and Off (1/0), and its base element is called the "bit".

So, it is a binary system where the basic components (the bits) unambiguously present a status of either "1" or "0".

Mathematically, human beings have grouped these components in clusters of 8, called "bytes", and the logic behind those bytes is that, from the rightmost bit to the leftmost bit (of the 8), each represents a power of 2, meaning:

  • the rightmost bit is 2 to the power of 0, therefore representing the number 1
  • the next one to the left is 2 to the power of 1, therefore representing the number 2
  • the leftmost bit is 2 to the power of 7, therefore representing 128
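Those place values can be verified with a short sketch (Python, purely for illustration):

```python
# Place values of the 8 bits in a byte, from right (2**0) to left (2**7).
weights = [2 ** i for i in range(8)]
print(weights)             # [1, 2, 4, 8, 16, 32, 64, 128]
print(sum(weights))        # 255 -- the byte 11111111, the largest 8-bit value
print(int("10000001", 2))  # 129 -- only bits 7 and 0 are set (128 + 1)
```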

 

Now, the core of our "modern" computers started by using the byte as an encoding unit: a table was agreed upon that maps each numeric value to a symbol, whether a digit, a letter, a punctuation mark or a control instruction. This was called the ASCII table (strictly speaking, a 7-bit encoding, leaving the eighth bit free for extensions).
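As a quick illustration of that table at work, Python exposes the ASCII codes directly:

```python
# ASCII assigns each character a number in the range 0-127; 'A' is 65.
code = ord("A")
print(code)                 # 65
print(format(code, "08b"))  # 01000001 -- the byte as stored
print(chr(0b01000001))      # A -- reading the bits back through the table
```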

The evolution of computing made this initial context grow, both in the number of bits applied to dealing with information and in the speed at which those operations take place.

From the 8-bit machines of the early days we moved to 16, 32, 64 bits and so on, while clock speeds rose from a few megahertz to 1 gigahertz, then 2, then 4, and they keep evolving.

In 1965, Gordon Moore, co-founder of Fairchild Semiconductor and Intel, predicted (based on observation) that the number of transistors in a dense integrated circuit would double every two years for the following decade, and therefore so would computing capacity. In fact, the rate has held for several decades, and that observation constitutes Moore's Law.
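The doubling rate is easy to sketch; the starting figure below is purely illustrative:

```python
# Moore's Law: transistor counts roughly double every two years.
def projected_transistors(start, years, doubling_period=2):
    """Project a transistor count assuming one doubling per doubling_period years."""
    return start * 2 ** (years // doubling_period)

# Illustrative only: a chip with 2,300 transistors, projected 20 years ahead.
print(projected_transistors(2300, 20))  # 2355200 (ten doublings: 2300 * 2**10)
```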

Quantum Computing

Quantum computers are similar to "traditional" ones in the sense that they also use a binary system to characterize data. The difference lies in the fact that quantum computers use one particular characteristic of subatomic particles (specifically electrons), called the "Spin", to account for the status "0" or "1".

The Spin is a rotational/vibrational characteristic of subatomic particles that is "manageable" because it responds to magnetic fields. Therefore, in very, very simple wording: while in "traditional" computers humans control a bit's status by applying power to it or not, in quantum computers we can set the status "Spin-up" (which corresponds to "1") or "Spin-down" (which corresponds to "0") by applying either a variation in a magnetic field or a focused microwave pulse.

And what a difference this makes!

Once we move beyond the atomic world and start manipulating electrons one by one, very strange things take place.

Note: electrons are the particle of choice for two reasons: they are the "easiest" to extract from an atom, and their quantum state can be coupled to photons, which can then transport the information over distance as light.

Subatomic particles behave both as matter and as waves, bearing the extraordinary characteristic of being able to represent both the Spin-up and Spin-down status at the same point in time.

Rather than spending a couple of thousand words describing in detail how this is possible and all the multidimensional implications it carries (parallel universes and so on), I will just advise you to take a look at Professor Richard Feynman's lectures on Quantum Physics.

Now, due to this specific characteristic of the quantum particles, this is the point where any similarity between "traditional" computers and Quantum Computers ends.

Making the picture crystal clear: in a "traditional" computer, to test all possible combinations within a set of just 4 bits, so that the one that applies to a given circumstance may be found, the machine goes through each combination one at a time, taking 16 different operations.

Now, since a quantum computer's bits (called Qubits) bear the capacity to represent both statuses at the same time, this process would merely require one single operation on a 4-Qubit quantum computer!

If instead of "half a byte" (4 bits, as represented above) we speak of the latest-generation software that deals with 128 bits, guess what? Analyzing all possible combinations amongst those 128 bits would require exactly one single operation on a 128-Qubit quantum computer!
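The classical side of that comparison is easy to sketch: enumerating all patterns of n bits always means 2**n sequential candidates.

```python
from itertools import product

# A classical machine must test the 2**n bit patterns one at a time.
def patterns(n):
    return ["".join(bits) for bits in product("01", repeat=n)]

four_bit = patterns(4)
print(len(four_bit))   # 16 sequential tests for 4 bits
print(four_bit[:3])    # ['0000', '0001', '0010']
print(2 ** 128)        # about 3.4e38 candidate patterns for 128 bits
```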

I think that by now you are starting to get a picture of the potential involved. Still, let me give you a "hand" here: a 512-Qubit quantum computer would be able to represent, in one single operation, more states than there are atoms in the Universe.

And quantum computing has a "Moore's law" of its own: instead of processing capacity doubling every two years, each new generation has proven to be 500 thousand times more powerful than the preceding one.

Going back to the analogy between the horse and the Wright Brothers' plane, it is as if they had given birth to the Lockheed SR-71 Blackbird, which can fly at almost 2,200 miles per hour... now imagine what will happen a couple of generations into the future...

Constraints

Here are some constraints on the establishment of real, to-the-letter quantum computers:

  • The environment

As previously mentioned, the phenomenon that allows quantum computing to be such a powerful tool resides in the ability of subatomic particles to simultaneously represent several states; in Physics, this is called "superposition".

Now, as opposed, say, to quartz, which is used in modern-day clocks because its molecules present a constant vibratory rate that allows high precision across a wide range of environmental conditions (pressure, temperature, humidity, luminosity and so on), superposition only happens if no external factors are "exciting" the subatomic particles; the particles only behave like this as long as they have not been exposed to any external disturbance.

It would be enough for a quantum computer chip to be hit by sunlight to render it inoperative.

Therefore, a quantum computer is basically composed of a chip the size of a fingernail plus a supporting cooling and isolation shell the size of an SUV, which ensures the required "sterile" and isolated operational environment, and it costs around $25 million.

  • Algorithms

Writing algorithms for quantum computers requires the ability to think in terms of the laws of Quantum Mechanics, and is therefore not a task for the common developer.

Peter Shor, from MIT, has developed a quantum algorithm (the "factoring algorithm") that led the intelligence community to the verge of a nervous breakdown by rendering most encryption keys ineffective. Basically, while the most powerful standard computer would take hundreds of years of continuous processing to get there, if tomorrow any of us could bring home a quantum computer with the factoring algorithm embedded in a piece of software, we could break any RSA encryption in a matter of seconds, making all the bank accounts or electronic transactions we could "look at" absolutely transparent.

Lov Kumar Grover, Ph.D. from Stanford and currently working at Bell Laboratories, developed a database-query quantum algorithm that bears the uniqueness of being able to retrieve the right information from a vast unstructured database in a few seconds: like finding a needle in a colossal haystack within moments.
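To put that needle-in-a-haystack claim in perspective, here is a classical sketch: an unstructured search must probe the entries one by one (on average half of the N entries), whereas Grover's algorithm needs only on the order of the square root of N quantum queries.

```python
import random

random.seed(0)  # deterministic shuffle, for illustration

# Classical unstructured search: sequential probes until the needle turns up.
def classical_search(haystack, needle):
    for probes, item in enumerate(haystack, start=1):
        if item == needle:
            return probes
    return None

N = 1_000_000
haystack = list(range(N))
random.shuffle(haystack)

probes = classical_search(haystack, 42)
print(probes)           # number of sequential probes needed this time
print(int(N ** 0.5))    # 1000 -- Grover-scale query count for the same N
```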

  • Particle manipulation

The current quantum computers are technically only partially quantum, since they are able to use strings of electrons but not yet each electron individually. However, a laboratory experiment at the University of New South Wales in Australia has recently managed to do so; therefore, maybe the next generation of quantum computers will.

Potential

All of this is something that is being developed “as we speak”.

In 2011, the development stage of quantum computers allowed the tremendous accomplishment of factoring, in one single operation, 15 into 3 × 5. Yes, just that...

Back then (in 2011), Dr. Michio Kaku, one of the brightest minds of our era, stated in an interview that it was not clear when we would have the first operational and useful quantum computers.

Four years later, in 2015, D-Wave (a Canadian company that produces quantum computers), after having developed a quantum computer for Lockheed Martin (the company that, amongst many other military assets, produced the F-22 Raptor fighter jet), produced another one whose resources are shared by Google, NASA and USRA to perform calculations that normal computers, no matter how powerful, cannot accomplish within a reasonable time frame (meaning less than 100 years working non-stop).

This last machine has been used (since 2015) for the purposes of:

  • Artificial Intelligence investigation and development
  • Development of new drugs
  • Autonomous machine navigation
  • Climate change modeling and predictions
  • Traffic control optimization
  • Linguistics

 

Building a quantum computer doesn't mean building a faster computer, but a computer that is fundamentally different from a standard computer.

Dr. Dario Gil, Head of IBM Research

We are flabbergasted by the number of things standard computers are capable of solving and how fast they do it. Yet there are several things they are either not capable of solving, or that would take them so much time that the answer would bring us no benefit.

Can’t think of any?

Well, here are some:

M=p*q – If someone gives you a number M which is the product of two unknown, very large prime numbers (p and q) and asks you to find them, then although there is only one pair of primes that meets the requirement, this is extremely hard to accomplish and would require many sequential divisions by prime numbers until you get there. It is in fact so difficult that it is used as the basis for RSA encryption; remember from above?
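A toy sketch shows why: classical trial division must grind through candidate divisors up to the square root of M, which becomes astronomically slow for the hundreds-of-digits moduli used in real RSA.

```python
# Trial division: the brute-force way a classical machine recovers p and q.
def factor(M):
    d = 2
    while d * d <= M:
        if M % d == 0:
            return d, M // d
        d += 1
    return M, 1  # M itself is prime

print(factor(15))             # (3, 5)
print(factor(3 * 1_000_003))  # small demo; real RSA moduli are ~600+ digits
```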

By the way, the D-Wave machines are not yet at the maturity point which allows dealing with such extremely complex problems.

Highly advanced alloys – molecules form when electron orbits overlap, and while dealing with well-known simple elements like Hydrogen and Oxygen it is very easy to determine the outcome of such a combination (H2O, or water), using highly complex elements while attempting to create new materials requires tremendous computing power and trial and error, because those molecular bonds depend on Quantum Mechanics.

The simplest example can mean 2 to the power of 80 combinations needing to be calculated to reach the solution that leads to a stable molecule, which would take years on a standard computer but just minutes at the current state of quantum computing capacity.

The most recent D-Wave computer was successfully used in 2016 by a joint team with participants from Google, Harvard University, Lawrence Berkeley National Laboratory, Tufts University, UC Santa Barbara and University College London to simulate a Hydrogen molecule. This opens the door to the accurate simulation of complex molecules, which may result in exponentially faster achievements, at much lower cost, in the fields of medicine and new materials.

Logistics optimization – Logistics systems are some of the most complex day-to-day contexts that humans face, with a tremendous financial impact on the global economy. Consider the example of DHL: this international corporation's core business is based on getting a given physical asset from geography A to geography B within the time frame its clients expect when hiring it. To accomplish that, the company has several back-to-back running service contracts with logistics operators, besides having its own fleet of planes, boats and cars. Nevertheless, keeping the entire system optimized even under perfect conditions, where no strikes or natural disasters happen, is hard enough, because a one-minute delay at a given traffic light may translate into a one-day delay in delivering an asset across the Globe. Quantum computing will allow, through data input from live monitoring sensors across the Globe, the constant optimization of routes and available cargo space, in a way that could easily represent a 600% profit increase over current operational standards, or a significant price reduction for clients, while assuring accurate and optimized delivery timings.

Predicting the future – ever watched "Minority Report" with Tom Cruise? In the movie, although through a different process, computation was able to show what had over a 90% probability of happening concerning potential crimes. Dealing with a complex scenario, the likes of an international crisis, is "merely" a matter of computing power, which must handle an exponentially larger range of influencing co-factors that may affect the result. A standard computer would take years to reach the most probable outcome of such a crisis, long after it had been "naturally" resolved, yet a quantum computer could show the top 5 most probable outcomes within a matter of minutes, therefore becoming a priceless decision-support tool.

 

Artificial Intelligence – to begin with, let's define intelligence as the ability to acquire new knowledge and change one's opinion based on such new information. The contribution of quantum computing to the potential of AI once again pertains to speed, and this time "speed of thought". How powerful would a "mind" be that could analyze a complex scenario (like the above-mentioned logistics nightmare of a DHL-like company) and promptly decide which course of action to take, and where to improve processes by assessing that some established workflow is no longer suitable?

The problem would then be having AIs making decisions and replacing them with new ones at a rate that gives humans no time to understand the underlying motives, hence no say in the approval or disapproval of such strategic actions.

Safer communications – Quantum Cryptography, what is it?

We have seen that a quantum computer has the power to crack our current state-of-the-art encryption pillars; but if it has the power to crack them, it also has the power to create something better.

The problem with the methods of encrypting messages currently within our reach is that all of them depend on pre-established keys, either unique keys or combinations of public and private keys, and those keys are difficult to crack only because of the methodology within reach of standard computers.

Now, quantum encryption cleverly exploits the initial "problem" of dealing with particles that behave like a wave until there is an attempt to observe them, at which point they immediately behave like a particle.

Photons, if paired or entangled using the appropriate technique, will each maintain their relative spin regardless of space or time. So four pairs of photons, each transporting a status "01" conveyed by their spin, creating a "qubyte" represented by "01010101" (or any other combination, for that matter), will maintain this information unaltered for as long as they are not "excited", and any attempt to read the code will immediately destroy it.

This bears the power of effectively creating unbreakable, foolproof secure messaging.

P.S.: This is a guest post published at the request of Karl Crisostomo of tenfold.com, and it refers to our earlier article titled "Section 65B interpretation in the Quantum Computing Scenario".

Naavi

 

 

I must admit here my excitement about Quantum Computing and about discussing the impact of a principle of Physics on Cyber Law development. I left my formal college education as a student of Physics when Quantum Mechanics was in its infancy, and this feels like going "Back to the Past".

Though I did my post-graduation in Nuclear Physics and studied Particle Physics to some depth, specializing in subjects such as Nuclear Forces, the subject of Quantum Physics was still new and not properly understood at that time. I had even baffled everybody, including myself, in an interview at the Physical Research Laboratory (PRL) in Ahmedabad, when I solved in real time a quantum physics question put to me by the interviewers for the post of "Scientific Assistant", which most other interviewees had failed to do.

Though I declined the offer despite repeated requests to join, and turned my back on pure science, I never imagined that after 40 years I would return to study the impact of Quantum Mechanics on the present domain of my specialization, which happens to be the Techno Legal aspects of Law.

But it appears that Cyber Law, in India and elsewhere, will be deeply impacted by emerging technologies, of which Quantum Computing is one that will overturn many present concepts of law.

Hence the study of "Cyber Laws in the Emerging Technology Scenario" will be the new focus, which we may term the "Quantum Cyber Law Specialization" or "Futuristic Techno Legal Specialization".

Naavi


Today I have taken up one topic for discussion: the interpretation of Section 65B of the Indian Evidence Act (IEA), and an examination of whether Naavi's interpretation of Sec 65B survives the superpositioning concept of Quantum Computing.

The legal and judicial community has struggled to interpret the section even after 18 years of its existence, and it would be a further challenge to interpret Sec 65B in the emerging quantum computing age. For a large part of the 18 years since Section 65B (IEA) came into existence, few recognized its existence and hence there was not much debate on the topic. It is only in the recent past that the community has started discussing the issue, many times with a wrong perspective.

During most of this time, Naavi's interpretation of Section 65B was not seriously challenged. In recent days there are a few law professionals who would like to interpret things differently. They may draw support from some judges who are dishing out judgements without fully understanding the impact of their wrong decisions on society. This tendency comes from the inability of some to unlearn what they have learnt over the last 3 or 4 decades of their legal careers. They are therefore uncomfortable with what the Supreme Court stated unambiguously in the Basheer judgement, and want to interpret things in their own way.

Naavi has been saying: wait... it took 14 years for the Supreme Court to realize the existence of Sec 65B, and it may take a few more years for the entire community to come to the understanding which Naavi has been advocating since 2000.

In this connection, I have tried to give a thought to what will happen to my interpretations of Section 65B when Quantum Computing comes into play.

Quantum Computing is not an easy concept to understand even for specialists in Physics. Hence for lawyers and judges to understand Quantum Computing would be understandably challenging. It is possible that I too may have to refine some of my own interpretations presented here, and I reserve my right to do so. I will, however, explore all the Cyber Law challenges presented by Quantum Computing. For the time being, I am only looking at the concept of "SuperPositioning" and its impact on Section 65B interpretation.

What is SuperPositioning?

SuperPositioning is a concept in Quantum Computing. In the classical computing scenario, a bit can have a value of either 0 or 1. The quantum bit, or Qubit, can however have a value of 0 and 1 at the same time. When you measure the value it will show either 0 or 1, but when you are not measuring, it can hold the two values simultaneously.
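In numeric terms (a minimal sketch of the textbook description, not a simulation of real hardware): a qubit state is a pair of amplitudes whose squares give the probabilities of measuring 0 or 1.

```python
import math

# A qubit state is a pair of amplitudes (a, b) with a**2 + b**2 = 1.
# Measurement yields 0 with probability a**2 and 1 with probability b**2.
a = b = 1 / math.sqrt(2)   # equal superposition of 0 and 1
p0, p1 = a ** 2, b ** 2
print(round(p0, 3), round(p1, 3))  # 0.5 0.5 -- a fair coin until measured
print(math.isclose(p0 + p1, 1.0))  # True -- probabilities must sum to 1
```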

This "Dual State capability" of a Qubit may be fascinating for the scientist who swears by concepts such as Heisenberg's uncertainty principle, the multiple quantum energy levels of the electron in a hydrogen atom, the quantum energy state of the nucleus of a Phosphorus atom, the direction of spin of a subatomic particle, light being both a wave and a particle at the same time, there being a parallel universe, time being another dimension, a wormhole being a tunnel to the future, etc.

But for a judge who is looking for "evidence beyond reasonable doubt", and for a criminal justice system where a witness is expected to answer only in the binary "Yes" or "No", the uncertainty inherent in Quantum Computing will be a huge challenge.

In fact, at present we can state without batting an eyelid that if I stood in the witness box and started talking of "SuperPositioning", and more specifically of the "Entanglement" aspects of Quantum Computing and how they require a re-interpretation of Section 65B, I would be thrown out of the Court as somebody who has lost his mind.

Since nobody can throw me out of this blog, let me take the courage to proceed further and try to raise some issues which may be academic discussion points as of now but will be important for the Cyber Lawyers of the future.

But in the days to come, Cyber Law will have to be revised to accommodate the "Uncertainty Principle of an Electronic Document". The time to recognize this concept has already come in respect of Section 65B.

Current Dilemma in Section 65B Yet to be resolved

In the years since ITA 2000 came into being, and until the Supreme Court judgement in the P.K. Basheer case on 18th September 2014, there was little discussion of Section 65B of the Indian Evidence Act (IEA) in the higher echelons of the Indian judiciary.

The decision of the Chennai AMM Court accepting the first Section 65B certificate issued by Naavi and convicting the accused in the historic Suhas Katti case (refer here) was perhaps too insignificant in the eyes of many senior advocates to take note of, and hence went unnoticed.

Since there were no debates in the august Supreme Court about Section 65B, "eminent advocates" who had gained their eminence through expertise and years of work in non-cyber-law domains such as Constitutional Law or the Law of Evidence did not take time off to discuss the implications of Section 65B in right earnest. One opportunity, presented in the Afsan Guru case in 2005, was lost because it was a high-profile case of a terrorist attack against the Nation, in which technical issues could not be given too much importance. Hence, when Mr Prashant Bhushan raised the technical issue of the non-availability of a Section 65B certificate for some of the evidence, the Court considered the other evidence before it and proceeded with the case.

This was interpreted as a rejection of the "mandatory requirement of a Section 65B certificate", and became a precedent that prevailed until the Supreme Court overturned it in the P.K. Basheer case.

However, Naavi continued to hold his ground and did not accept the Afsan Guru judgement as correct in respect of the mandatory requirement of a Section 65B certificate for the admissibility of electronic evidence.

We have discussed several of the issues arising out of the P.K. Basheer judgement, both on naavi.org and ceac.in, and readers may refer to them for more clarity.

We have held that the P.K. Basheer judgement provided judicial support for most of Naavi's views regarding Section 65B. There was only one aspect of the judgement where, as we have pointed out, clarity remained to be provided. It was the view expressed in the judgement as follows:

“The situation would have been different had the appellant adduced primary evidence, by making available in evidence, the CDs used for announcement and songs. Had those CDs used for objectionable songs or announcements been duly got seized through the police or Election Commission and had the same been used as primary evidence, the High Court could have played the same in court to see whether the allegations were true. That is not the situation in this case. The speeches, songs and announcements were recorded using other instruments and by feeding them into a computer, CDs were made therefrom which were produced in court, without due certification.”

Naavi has consistently held that an "Electronic Record" is a third type of evidentiary object, different from the "Oral" and "Documentary" evidence provided for in Section 17 of the IEA, and should be considered a special category whose admissibility is governed by the provisions of Section 65B alone.

While interpreting Section 65B, some of the "eminent non-cyber-law jurists" have still not reconciled themselves to unlearning the concepts of "Primary Evidence" and "Secondary Evidence", where "Primary Evidence" lies inside a CD or a hard disk and "Secondary Evidence" is a copy produced because the primary evidence cannot be brought into the court.

In the electronic document scenario, the original document is a "Binary Expression". The binary expression which we call an "Electronic Document" is a sequence of bits, present either in the form of magnetic states of units of a magnetic surface, or as depressions on a CD surface which reflect light in a manner different from their neighbouring units. When that stream of bits is read by a reading device associated with software running on hardware, the sequence of binary expressions is interpreted as "Text", "Audio" or "Video", which we humans call "Electronic Documents" and debate whether it is "Primary Evidence" or "Secondary Evidence".

"Original Electronic Document" is an expression that can only refer to the first creation of the given sequence of bits which constitutes the electronic document being interpreted as evidence. For example, when a digital camera captures a picture, it first creates a sequence of bits in RAM. This is, however, not a recognized electronic document, because it is in a state not "meant to be accessible so as to be usable for a subsequent reference" (Sec 4 of ITA 2008).

When this sequence of bits gets transferred to "stored memory" in a device such as a memory card or a hard disk, that represents the first instance of the electronic document coming into existence. Before this, the magnetic/optical surface on which the document is recorded was in a "Zero State": every bit on the surface was designated "Zero". When the electronic document is etched on the surface, some of these "Zeros" are converted into "Ones", and the unique sequence created is subject to a "Protocol". This sequence of bits, stored subject to a protocol, is what we call the "Original Document".

But this "Original Document" has no meaning without being read by devices which understand the protocol and render the information in a human-understandable form. For example, if the content has been captured in a .txt, .doc, .mp3, .avi, .mp4 or other format, then the electronic document is a sequence of zeros and ones conforming to the respective protocol. It is not possible to separate the protocol information from the electronic document itself, and hence the document remains in a given format along with its protocol information.
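A small sketch makes the point: the same stored bytes are meaningful only through the protocol they were written with (the sample string below is of course hypothetical):

```python
# A stored "document" is just a bit sequence; here it is written with UTF-8.
raw = "Sec 65B".encode("utf-8")
bits = " ".join(format(byte, "08b") for byte in raw)
print(bits)                 # 01010011 01100101 01100011 ... the raw etching
print(raw.decode("utf-8"))  # Sec 65B -- read with the right protocol
# Read with the wrong protocol, the very same bits come out as gibberish:
print(raw.decode("utf-16-le", errors="replace"))
```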

When a reading device is presented with the electric/electronic impulses generated by such a sequence of bits, and the device is capable of interpreting the protocol, it will convert them into a humanly experienceable document, which we may call Text, Audio or Video, and which a judge can view and act upon. If the device is not capable of understanding the protocol, the document will be rendered in an unintelligible form: if it is text, it will appear as gibberish; if it is audio, we may hear a meaningless echo; if it is video, we may see only lines on the screen. If a sequence of bits needs to be experienced by a human being, we must use a device which understands the protocol and converts the bits, in a specific manner, into a humanly readable/hearable/viewable form on a computer screen or a speaker.

So even if, in the Basheer case, the original CD had been produced, or in the Suhas Katti case the hard disk with Yahoo Inc. had been produced, or in other cases the memory card of a video camera were produced as "Original Evidence", the judge can view it only by using a device configured for the protocol to which the sequence of bits corresponds. If the judge views the document as he sees it on a computer, he becomes responsible for the protocols that have been used in rendering the sequence of bits into a humanly understandable document.

In a comparable environment, if a "forged" signature is questioned before a Court, the judge can himself view the signature and form his own opinion on whether it is forged. But prudence requires that the Court ask an expert to certify whether it is forged, so that the judge does not become the witness and only interprets the evidence with reference to the law.

The same principle applies to electronic documents viewed by a Judge without insisting on a Section 65B certificate from another.

This aspect was recognized by the magistrate Thiru Arul Raj of the Chennai AMM court in the Trisha defamation case, referred to in my article "Arul Raj, the Unsung Hero" (refer here), in which the principle was laid down that even when the so-called "Original" electronic document is before the Court, it has to be Section 65B certified by a third party.

In this background we can now appreciate why the Section 65B certificate has to be produced in the manner in which it is required to be produced, namely:

“identifying the electronic Documents rendered in the computer output”,

“Indicating the process by which the computer output was produced”,

“Providing certain warranties on the production of the Computer output” and

then considering the “Computer Output” as “Admissible Evidence” without the need for producing the original.

In this process the certifier is stating that, when he followed a certain protocol indicated in the certificate, he was able to view the electronic document in the form in which it has been presented in the computer output, and that he is responsible for the faithful reproduction of what he himself saw or heard into the format in which he has rendered the computer output.

I wish all eminent jurists, including the Judges of the Supreme Court, would go through the above multiple times to appreciate why I have been stating that a Section 65B certificate can be produced by any third party (subject to a level of credibility) who has viewed the document, and not necessarily the administrator of the device (as wrongly indicated in the SLP order in the Shafhi Mohammad case).

This also underscores my view that in the case of an electronic document we always deal with the “Secondary Document”, which is a rendition of the original etching of the binary sequence; humans are incapable of viewing the “Original”, which is a binary expression combined with the viewing protocol. We should stop comparing the “Computer Output” under Section 65B with a photocopy of a paper document and talking as if the two are the same.
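The point that a human never sees the “Original” binary but only a rendition under a viewing protocol can be illustrated with a minimal sketch (the byte values below are chosen purely for illustration): the very same binary sequence produces different human-readable “computer outputs” depending on the decoding protocol applied to it.

```python
# Illustration: the same binary sequence yields different human-viewable
# "renditions" depending on the viewing protocol (here, the text encoding).
raw = bytes([0xE2, 0x82, 0xAC])  # one binary sequence etched on storage

as_utf8 = raw.decode("utf-8")      # rendered under the UTF-8 protocol
as_latin1 = raw.decode("latin-1")  # rendered under the Latin-1 protocol

print(as_utf8)    # one "computer output": the single character '€'
print(as_latin1)  # a different rendition of the same original: 3 characters
```

Neither output is the “Original”; each is a rendition, which is why the certifier attests only to the faithful reproduction of what he viewed under a stated protocol.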

Quantum Computing Era

Now, let us turn our attention to the main object of starting this post which was to look at Section 65B in the context of the emerging technologies such as “Quantum Computing”.

Legal professionals may find the earlier paragraphs hard enough to digest and may not have the stomach to start debating how Section 65B would be interpreted in the Quantum Computing era. Maybe it is too early to discuss the Cyber Law requirements of the emerging technologies, since even scientists are only now beginning to understand Quantum Computing.

But a “Futuristic Cyber Law Specialist” (whom we may also call a “Quantum Cyber Law Specialist” or a “Futuristic Techno Legal Specialist”) needs to tread a path which nobody else has trodden, and therefore we shall continue our exploration.

We must realize that Quantum Computers are expected to work alongside Classical computers, and hence the current concept of data storage in bits with a state of “0 or 1” may not vanish with the advent of Qubits, which can be in a state of “0 and 1” simultaneously. Rather, data may be processed in an “Artificial Intelligence Environment” using “Quantum Computing” and presented in a classical computing environment.
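The “0 and 1” state of a qubit, and its collapse into an ordinary classical bit when observed, can be sketched in a few lines of classical code (a toy simulation, not real quantum hardware; the names `state` and `measure` are mine):

```python
import math
import random

# Toy sketch of a single qubit as a 2-entry state vector: amplitudes for
# |0> and |1>. An equal superposition is "0 and 1" until it is measured.
state = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # (|0> + |1>) / sqrt(2)

def measure(amplitudes):
    """Collapse the superposition: return 0 or 1 with probability |amp|^2."""
    p0 = abs(amplitudes[0]) ** 2
    return 0 if random.random() < p0 else 1

# Every measurement yields an ordinary classical bit -- which is why the
# output a human finally views is always in the classical domain.
results = [measure(state) for _ in range(10_000)]
print(sum(results) / len(results))  # roughly 0.5: about half the outcomes are 1
```

The sketch makes the legal point concrete: whatever superposition exists inside the machine, what reaches the observer is a definite classical value.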

In view of the above, Quantum Computing will be part of the process, but the human interaction with the electronic document which is certified as a computer output in a Section 65B certificate would take place on a classical computer.

Additionally, “Quantum Computing” may sit in between two classical computing scenarios. For example, data may be captured by a classical computing system and become part of the “Big Data” which is processed by a Quantum Computing system, with the results rendered back in the classical computing environment.

Though the journey of the “Electronic Evidence” begins with the original binary impressions on the first classical computing device and passes through the “wormhole-like” quantum computing environment, it comes back into the classical computing environment when the Section 65B certifier views it and converts it into a Computer Output.

I therefore consider that Naavi’s interpretation of Section 65B certification will survive the Quantum Computing age. Lawyers may however raise certain forensic doubts regarding the reliability of an electronic document certified under Section 65B, and forensic witnesses under Section 79A may need to answer them to the satisfaction of the Court.

However, Section 65B certification, being a matter-of-fact certification of what is viewed as a Computer Output on the classical computer of the observer, will not be vitiated by the complexities of the processes that go on behind the scenes.

Courts should understand that they are not entitled to subject the Section 65B certifier to cross-examination on the reliability of the back-end processing systems, as long as those systems conform to the standards the computing industry adopts as technology.

I look forward to views from both my legal and technology friends regarding the above.

Naavi

Cyber Laws have been in discussion in India since around 1998, when the first draft was published. After the passage of the Information Technology Act 2000, the laws came into existence and started affecting every one of our activities on computers, including personal activities such as E-Mails, web activities and mobile phone communication, as well as commercial activities such as E-Banking, E-Commerce, E-Governance, etc.

However, 20 years after the draft E Commerce Act 1998 was released by the Government of India, our Courts and Police, as well as our Lawyers, are still struggling to understand and interpret the law. We therefore have difficulties in understanding Section 65B certification of electronic evidence, the legal implications of digital and e-sign, and certain crimes such as hacking, man-in-the-browser attacks, viruses, Trojans, etc.

The Indian judicial system, however, being an adversarial system, is capable of absorbing inadequate understanding and interpretation of the law, since the responsibility of the judge is to interpret the evidence and arguments as presented by the parties. At higher levels, the Judiciary is comfortable with a state of inconsistency in which every judge takes his own decision based on what he understands of the law and leaves it to the higher judicial authority to correct mistakes if required.

This means that the Garbage In, Garbage Out principle applies to our judicial verdicts. This is acceptable to the judicial system. But should it also be acceptable to the victims of bad judgements?… a point to ponder.

In some strange way, being a country where citizens are tolerant of inefficiency and corruption in all affairs of the Government, Police and Judiciary, we simply shrug off a bad decision and move on.

But one thought comes to my mind when we observe some of the latest developments in technology around us.

First is the advent of Big Data, Data Analytics, IoT, Artificial Intelligence, etc., which are common discussion points in the IT industry today. We have been discussing what happens to the concept of “Privacy” when “Aadhar” is used as a Universal ID, as if it were the biggest challenge before humanity. Silently, however, Artificial Intelligence and humanoid robots have made their appearance, and they will create many new challenges for Cyber Law makers and Cyber Law interpreters.

Some of the challenges in applying Cyber Law to current technological developments have manifested in the domain of Banking and Finance. The debates on Blockchain technology, Bitcoins, etc., have shown the complications that the new technologies may create in the economic world. If simple negligence in technology implementation in Banking, such as not linking the SWIFT messaging system to the CBS system and providing access without robust security, can give rise to frauds worth thousands of crores and destabilize our economy and stock markets, we can imagine what kinds of upheavals may be caused in society when new technology developments such as Artificial Intelligence and humanoid robots take over key decision-making processes in, say, our Governance and Military operations.

In parallel, the manufacturing industry is also transforming itself into the Industry 4.0 state, where Cyber Physical Systems take over manufacturing processes with Artificial Intelligence and Data Analytics supporting the back-end decision making. The manufacturing industry is much less Cyber Law aware than the Banking and IT industries, and hence the legal implications of frauds, as well as the probability of frauds and crimes occurring, are much higher in the manufacturing sector than in the Banking and IT industries.

I therefore anticipate a higher level of problems in the Manufacturing industry in India when the IT professionals try to push through “Disruptive Innovations” unmindful of the “Destructive Impact” on the society.

The Information Security focus therefore needs to be re-directed to address the requirements of the manufacturing industry even while we tackle the issues in the IT and Banking/Finance domains.

The fact that, even 20 years after the introduction of Cyber Laws in India, our legal and judicial system is yet to understand the law and implement it in a consistent manner makes me wonder how the Cyber Law creators and Cyber Law interpreters will react when new developments such as “Quantum Computing” become a reality.

A few months back, a technologist asked me in a meeting whether Indian Cyber Law is ready to face the challenges posed by Quantum Computing. I did state that a “proper interpretation” of the current laws could help us apply them whether the information is processed in a classical computer system, where data is stored in the “binary” language, or in Qubits, where data is stored and processed differently. However, considering the inability of the system to understand even the current laws, it appears that my optimism may have been misplaced.

For those who struggle to interpret an electronic document created as a sequence of binary interpretations of the states of transistors, it would be almost impossible to even imagine that the “transistor” will now be replaced by a “quantum energy state” which can take the uncertain value of one, zero, or both. In such a situation, if a hacker has manipulated the back-end process and generated a fraudulent output, how we recognize the “unauthorized manipulation of data” and how we produce forensic evidence of the manipulation will be challenges that are not easy to solve.

Add to this “superposition” the prospect of “entanglement”, where two data holders can be physically separated and yet measuring the state of one instantly determines the state of the other, and the problem becomes even fuzzier.
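The correlation that entanglement produces can be sketched with another toy simulation (again classical code mimicking the statistics, with names of my own choosing): a maximally entangled “Bell pair” yields a random bit on each side, yet the two outcomes always agree, however far apart the two qubits are.

```python
import random

# Toy sketch of an entangled Bell pair, (|00> + |11>) / sqrt(2): each
# measurement outcome is individually random, but the pair collapses
# jointly, so the two sides always agree.
def measure_bell_pair():
    outcome = random.choice([0, 1])  # joint collapse to |00> or |11>
    return outcome, outcome          # (qubit A's bit, qubit B's bit)

pairs = [measure_bell_pair() for _ in range(1000)]
print(all(a == b for a, b in pairs))  # the outcomes are perfectly correlated
```

Note that entanglement correlates the outcomes; it does not let one party send a chosen message to the other, which is part of what makes reasoning about evidence in such systems so unintuitive.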

If nothing else is certain, the quantum increase in the computing power of the future generation of computers (working as back-end systems driven by quantum computing) will require a change in our perception of the “probability of a cryptographic key being broken”. If the current key strengths become unreliable, we may need to re-think many of the concepts of information security and make corresponding changes in our laws.
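One widely cited rule of thumb makes this concrete: Grover’s quantum search algorithm examines N possibilities in roughly √N steps, so an n-bit symmetric key is generally treated as offering only about n/2 bits of effective security against a quantum adversary. A back-of-envelope sketch (the function name is mine, and this is a simplification that ignores practical overheads of running Grover’s algorithm):

```python
# Rule-of-thumb sketch: Grover's algorithm searches N keys in ~sqrt(N)
# steps, so an n-bit symmetric key gives roughly n/2 bits of effective
# security against a quantum adversary.
def effective_symmetric_bits(key_bits: int) -> int:
    return key_bits // 2  # quadratic speedup halves the security level

for key in (128, 256):
    print(f"{key}-bit key: ~{effective_symmetric_bits(key)} effective bits "
          "against quantum search")
```

This is why the usual advice is to move to longer symmetric keys (and to post-quantum algorithms for public-key cryptography, where the threat from Shor’s algorithm is far more severe), and why legal presumptions about “practically unbreakable” keys may need revisiting.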

Even today, the criminal jurisprudence principle that all evidence should be “proved beyond reasonable doubt” poses huge challenges when applied to electronic evidence. In the Quantum Computing era, such issues will be even more challenging.

If, therefore, we want to upgrade our Cyber Laws from the current state of Cyber Law 1.0 to the era of Artificial Intelligence, which could be Cyber Law 2.0, and subsequently to the era of Quantum Computing, which could be called Cyber Law 3.0, then our Cyber Law makers need to start acting today to understand the problems that the new technologies will pose to our Judges, who are now in the very initial stages of appreciating the current version of Cyber Law.

Will the Government understand the challenge that the emerging technologies in computer software and hardware will pose?… and if so, when?… That is the question that remains unanswered in my mind.

I welcome the views of the readers… if any.

Naavi