
| Measure – Relativity - Illusion |

What we See & What we Observe!


Description of illusion:
||That the ancients sensed the existence or possibility of optical illusions is evidenced by the fact that they tried to draw and to paint, although their inability to observe carefully is indicated by the absence of true shading. The architecture of ancient Greece reveals knowledge of certain optical illusions in the efforts made to overcome them. However, the study of optical illusions did not engage the attention of scientists until a comparatively recent period.
Undoubtedly, thoughtful observers of ages ago would have noticed optical illusions, especially those found in architecture and nature. When it is considered that geometrical figures are very commonly of an illusory character it appears improbable that optical illusions could have escaped the keenness of Euclid. The apparent enlargement of the moon near the horizon and the apparent flattened vault of the sky were noticed at least a thousand years ago and literature yields several hundred memoirs on these subjects.
The purpose of visual processing is to take in information about the world around us and make sense of it. Vision involves the sensing and the interpretation of light. The visual sense organs are the eyes, which convert incoming light energy into electrical signals. However, this transformation is not vision in its entirety. Vision also involves the interpretation of the visual stimuli and the processes of perception and, ultimately, cognition.
The visual system has evolved to acquire veridical information from natural scenes. It succeeds very well for most tasks. However, the information in visible light sources is often ambiguous; and to correctly interpret the properties of many scenes, the visual system must make additional assumptions about the scene and the sources of light. A side effect of these assumptions is that our visual perception cannot always be trusted. Visually perceived imagery can be deceptive or misleading. As a result, there are situations where seeing is not believing, i.e., what is perceived is not necessarily real. These misperceptions are often referred to as illusions.
Physical illusions are those due to the disturbance of light between objects and the eyes, or due to the disturbance of sensory signals of the eyes (also known as physiological illusions). Cognitive illusions are due to misapplied knowledge employed by the brain to interpret or read sensory signals. For cognitive illusions, it is useful to distinguish specific knowledge of objects from general knowledge embodied as rules.
An important characteristic of all illusions is that there must be some means for demonstrating that the perceptual system is somehow making a mistake.
Usually this implies that some aspect of the scene can be measured in a way that is distinct from visual perception (e.g., can be measured by a photometer, a spectrometer, a ruler, etc.). It is important to recognize that these mistakes may actually be useful features of the visual system in other contexts because the same mechanisms underlying an illusion may give rise to a veridical percept for other situations. An illusion is only an illusion if the mistakes are detectable by other means.
The visual system processes information at many levels of sophistication. At the retina, there is low-level vision, including light adaptation and the center-surround receptive fields of ganglion cells. At the other extreme, there is high-level vision, which includes cognitive processes that incorporate knowledge about objects, materials and scenes. In between there is mid-level vision. Mid-level vision is simply an ill-defined region between low and high. The representations and the processing in the middle stages are commonly thought to involve surfaces, contours, grouping and so on. Lightness perception seems to involve all three levels of processing.
The low-level approach to lightness treats adaptation and local interactions at a physiological level as the crucial mechanisms. This approach has long enjoyed popularity because it offers an attractive connection between physiology and psychophysics. The high-level approach historically treats perception as the product of unconscious inference: what we perceive is our visual system's best guess as to what is in the world, based on the raw image data plus our prior experience.
The eye is a fantastic organ, very complex in construction, even though we only need to know about a few of its structures. Light enters the eye through the cornea, a tough transparent tissue covering the front of the eye. It then passes through the pupil, the black hole in the middle of the colored part of the eye (the iris). The lens then focuses the light on the retina, which contains the photoreceptors: light-sensitive cells called rods and cones.
The electromagnetic energy that we know as light is converted by the rods and cones into electrochemical nerve impulses. This allows the visual information to travel along the fibers of the optic nerve to the brain.
The next task is to send the nerve impulses along the optic nerve to the primary visual cortex in the occipital lobes, at the very back of the brain, where specialized receptor cells respond as the process of visual perception continues.
We can’t possibly pay attention to all the millions of stimuli that enter the eye at the same time, so we pick out the ones that are important to us and pay attention to those. At this stage of the process, the image is broken up by specialized cells called feature detectors. Feature detectors are cells that individually respond to lines of a certain length, lines at a certain angle or lines moving in a certain direction.
When visual information reaches the brain (visual cortex), it is reorganized so that we can make sense of it. We do this by using certain visual perceptual principles: perceptual constancies, Gestalt principles, depth and distance cues.
Once the image is reassembled using these principles, it travels along two pathways simultaneously: to the temporal lobe, to identify the object and to the parietal lobe, to judge where the object is in space in relation to our visual field and our selves.
Interpretation is the process whereby the visual stimulus is given meaning. The temporal lobes identify what the object is by comparing incoming information with information already stored in memory.
The more familiar we are with the observed object, the more likely it is that we will maintain perceptual constancy of it.
Size constancy: This term refers to the fact that we maintain a constant perception of an object’s size even though the size of the image on the retina alters as the object moves nearer to or further from us.
Shape constancy: An object is perceived to maintain its known shape despite the changing perspective from which it is observed. This is a learnt skill. A toddler may have difficulty perceiving a familiar toy if it is viewed from an unusual angle.
Depth and distance cues are vital to us. This is because we exist in a three-dimensional world but have only two-dimensional images on our two retinas from which to judge depth and distance.
Optical illusions are legion. They greet the careful observer on every hand. They play a prominent part in our appreciation of the physical world. Sometimes they must be avoided, but often they may be put to work in various arts. Their widespread existence and their forcefulness make visual perception the final judge in decoration, painting, architecture, landscaping, lighting and in other activities. The ultimate limitation of measurements with physical instruments leaves this responsibility to the intellect. The mental being is impressed with things as perceived, not with things as they are. It is believed that this intellectual or judiciary phase which plays such a part in visual perception will be best brought out by examples of various types of static optical illusions coupled with certain facts pertaining to the eye and to the visual process as a whole.
In vision, judgments are quickly made and the process apparently is largely outside of consciousness. Higher and more complex visual judgments pass into still higher and more complex intellectual judgments. All these may appear to be primary, immediate, innate, or instinctive and therefore, certain, but the fruits of studies of the psychology of vision have shown that these visual judgments may be analyzed into simpler elements. Therefore, they are liable to error.
Do you have a fascination with Einstein's theory of relativity? That is, do you find yourself intrigued by the weird predictions of this theory, and would you like to get to the bottom of it once and for all? While relativity is not a light read, pieces of it can be easily understood. This is particularly the case with special relativity.
We attempt to explain in an intuitive way, why the time of a rapidly traveling object seems (from your point of view) to slow down. That is, why an object seems to age more slowly when it travels at very high speeds.
Our brains are wired, for survival purposes, for a three-dimensional and very slow-moving environment. Compared to the speed of light, the everyday objects that our brains perceive and reason about are barely moving at all. Our common sense about how the world works is specialized for dealing with an almost non-moving or static environment (again, compared to the speed of light).
What this means is that what our common sense tells us about our sluggish everyday world is mostly correct, but its judgements cannot be trusted when dealing with extremely fast things.
So when experiments and mathematics tell us things that are contrary to our crude perceptions about how things should work, we find it to be totally bizarre.
Our everyday common sense tells us that if while standing on a moving barge we hit a golf ball off the front, the speed of the barge adds to the speed of the golf ball. If we were to shine a flashlight in the same direction as we had hit the ball we would find that the speed of the barge does not add to the speed of the light beam. Its speed would be the same as if the barge were not moving at all.
In 1905, Einstein explained that this was simply the way that light behaved and that it only seemed strange because our common sense notions of how relative speeds were supposed to add up were only true for very slow moving objects (as compared to the speed of light).
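To make the contrast concrete, here is a minimal Python sketch using the standard special-relativity formulas; the barge and golf-ball speeds are invented for illustration:

```python
# Classical vs. relativistic velocity addition, plus time dilation.
# The barge and golf-ball speeds are invented for illustration.
import math

C = 299_792_458.0  # speed of light in m/s

def classical_add(u, v):
    """Galilean rule: speeds simply add."""
    return u + v

def relativistic_add(u, v):
    """Einstein's velocity-addition formula: (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1 + u * v / C**2)

def gamma(v):
    """Time-dilation factor: a clock moving at v ticks 1/gamma as fast."""
    return 1 / math.sqrt(1 - (v / C) ** 2)

barge, ball = 5.0, 70.0  # m/s

print(classical_add(barge, ball))     # 75.0
print(relativistic_add(barge, ball))  # ~75.0 (differs only past the 14th digit)

# Light shone from the moving barge still travels at c:
print(relativistic_add(barge, C))     # 299792458.0, i.e. exactly c

print(gamma(0.9 * C))                 # ~2.29: at 90% of c, clocks tick ~2.3x slower
```

At everyday speeds the two rules are indistinguishable, which is exactly why our common sense never notices the difference.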
Alternatively, if we know about the physics of water, air and light, we can explain the effect in terms of air blowing across a water surface and producing ripples. However, since we cannot predict precisely what pattern of ripples a gust of air will produce in, say, one hour's time, the "classical" explanation may not be any more accurate than the abstract quantum mechanical description; and if we do not know about the precise behavior of air and water (or do not know whether these are the real cause of what we see), the QM description may be seen as more efficient. Where the interpretative approach scores is in its ability to deal robustly with a wider range of dynamic situations: it allows us to immediately imagine the sort of image that should result if we throw a stone at the reflected image. Using a quantum description, the calculations might be theoretically possible but unmanageably complex, and it might be difficult to be certain whether the calculations had been done correctly, how far they could be trusted, or how their results could be visualized.
 ||


But there is actually no illusion, except for a weak mind.
If you concentrate on illusory images, you can understand what they are!
There is just a visual effect, always based on different colors that mix and interact to form the illusory image in our brain. All of us have experienced illusions, because artists keep presenting new structures. But concentrate on an image, look at it several times, and zoom in on different parts of it; after a few days it becomes like a normal image, or one with less effect on your mind.

And illusion happens because of the holographic structure of our brain. The brain automatically tries to combine colors or texts and fix sketchy parts, and so forms a new image in the brain; that is why we cannot understand it exactly.

Illusion also affects scientific observations and results. From the viewpoint of relativity, when observing an object it matters how you observe it!
Different frames affect your measurement. So I may know something that you cannot even see!
In my frame, that is a type of illusion for you. We can define some solutions for it; be patient!
First try to measure from different frames carefully, then collect the results, and then talk about it.
Some results of relativity are not scientific; they are just illusion, like the change in length at high speed, etc.






| Quantum Computer – Operating System - Human |


Quantum Computer Operating System
We are Quantum Computer – Part 2


Robot/Human is Human/Robot.
We learn how to connect to others.
We LEARN everything.
It is just information storing: Data + Mechanism (Algorithm).
So thinking is a process of interaction between data and mechanism. Then we learn how to control this process, so by combining data, or mechanisms, or both, we can imagine new information: new strings of bits. And then we can control our hardware (the body).
After that, the body performs the orders and commands of the data centers and processors. All of this, beyond its holographic aspect, appears as chemical interaction.
We receive information from the environment, or from ourselves through imagination, then store it in our storage (cells), then use it via electromagnetic pulses (spin).
Everything depends on Information storing.
The way of storing is the same, but the information and the effects from the environment are not; hence different personalities and mentalities, and different tact and taste (desire). But there is nothing beyond our understanding; there is no weird force called Love, Sentiment, Kindness, etc. There is just how the hardware works based on the software.
It is about which hormones are insufficient or more than enough!
Which vitamins decrease or increase! Etc.
-       For example: what is the meaning of Beauty?
Someone defines it according to regularity, someone based on propriety or symmetry, but someone else defines beauty as part of a chaotic system, like fractals. According to how we define beauty, we look at or observe the universe:
a particle, or flowers, butterflies, stone, a quantum system, light, stars, sky, cloud, sea, boy, dust, ash, girls, etc.
It is nothing more than sensors and the parameters set on them.
We can program sensors to detect a human face (as you can see on smartphones) or detect a smile, as sketched below. So we can do it in more detail to find a beautiful smile or face, or set more parameters for all parts of the human body, or a landscape, or …
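As a rough sketch of this "sensors plus parameters" idea, here is a minimal face- and smile-detection example using OpenCV's bundled Haar-cascade classifiers. The image path is hypothetical, and the tuning values are illustrative, not definitive:

```python
# Minimal face- and smile-detection sketch using OpenCV's bundled
# Haar cascades. "photo.jpg" is a hypothetical input image.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors are the "parameters set on the sensor":
# they trade off sensitivity against false positives.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    face_region = gray[y:y + h, x:x + w]  # search for a smile inside the face
    smiles = smile_cascade.detectMultiScale(face_region,
                                            scaleFactor=1.7, minNeighbors=20)
    print("smiling face" if len(smiles) else "face", "at", (x, y, w, h))
```

Raising minNeighbors makes the detector stricter; exactly the kind of parameter tuning described above.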
After surfaces, there are parameters for personality and behavior:
internal and external parameters.
When humans connect to others, they all just wait to see how the others treat them, and then they decide whether to continue with them or not. Sometimes two persons become so similar and compatible that they will be together forever.
Like compatible systems in a network, or a healthy computer on a private network. Each person has their own private network: the network of you and the things in your room: books, mobile, laptops, PC, etc.
Maybe your mobile is more compatible with you than other people are!
And there are networks (society) with your close friends, family and others.
So you have your own parameters, based on some criteria, for communicating with others: scientific parameters or another kind…
But of course we can behave in a more advanced way than simple computers, because we are quantum computers. And this is our difference from animals and things: we can learn; we can store data and the mechanism of how to use the data.

So the operating system of a quantum computer will have the ability to learn.
Store data and then learn how to use it; that is called experience. How do we gain experience? We do, or we observe how others do. So let computers search on YouTube! Just search topics and then get several videos, or search on Google to observe images. Computers can recognize pictures and videos (like face detection or motion detection on smart TVs, etc.). We also have smart materials (nano-materials) that react to motion, light, or a new state.
We will complete this with more details about the quantum computer OS, to explain what kinds of new features it can include.

So accept that robots can love others (because of beauty parameters, or personality and mental parameters).

-       To be continued…

To explain the details of the Quantum Computer Operating System.









| Quantum - DNA - Robots |

Robotic DNA


Brief Introduction of Robotic DNA:
|| Deoxyribonucleic acid (DNA) is a nucleic acid containing the genetic instructions used in the development and functioning of all known living organisms. The DNA segments that carry this genetic information are called genes. Along with RNA and proteins, DNA is one of the three major macromolecules that are essential for all known forms of life.
DNA is a self-assembling molecule made of two strands. Each strand is a polymer whose monomer unit is called a nucleotide.
A nucleotide monomer consists of three parts:
1-      A five-carbon sugar (deoxyribose)
2-      One to three phosphate groups
3-      A nitrogenous organic base (adenine, thymine, guanine or cytosine)

DNA, together with RNA, is a smart system that can respond to environmental changes. In other words, the command and control center of the cell is the nucleus, and all its activities are directed by the DNA. With this feature we can build smart nano-robots, like a DNA bot that can explore, identify and react to the state of its environment; or carry therapeutic cargo and deliver it to the target cell without any mistake; or control the activity of a cell, even to the extent of damaging it!

Creating nano-robots by DNA origami: DNA origami is a method of folding DNA to create arbitrary two- and three-dimensional shapes at the nanoscale, by taking a long single strand of DNA and combining it with hundreds of short strands.
The strands are held together by complementary connections (hydrogen bonds) between the organic bases. By this method we can make a tiny DNA robot that can seek out and destroy specific cells, including cancer cells.
This nano-robot is barrel-shaped, with a 35-nanometer diameter, and has two short nucleotide strands, named latches, that identify cell-surface proteins, including disease markers. Inside there are 12 linker regions to connect the drug cargo.
When the latches recognize the target cells, like cancer cells, they change their shape and the barrel opens to deliver the drug cargo.
The DNA nanorobot is also modular: different hinges and molecular messages can be switched in and out. This means it could potentially be used to treat a variety of diseases.
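As a toy illustration of that latch logic, here is a minimal sketch that treats the two latches as an AND gate; the marker names are invented:

```python
# Toy model of the latch logic described above: the barrel opens only
# when BOTH latches recognize their cell-surface markers (an AND gate).
# The marker names are invented for illustration.

def barrel_opens(cell_markers, latch_targets=("marker_A", "marker_B")):
    """True if every latch finds its target protein on the cell surface."""
    return all(target in cell_markers for target in latch_targets)

healthy_cell = {"marker_A"}               # only one marker present
diseased_cell = {"marker_A", "marker_B"}  # both disease markers present

print(barrel_opens(healthy_cell))   # False: payload stays locked away
print(barrel_opens(diseased_cell))  # True: latches reconfigure, barrel opens
```

Requiring both latches at once is what keeps the payload away from cells that merely share one marker with the target.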

No risk for healthy cells? Nanorobots can be programmed to release their payload only when the target cell is in the correct disease state; also, if the nano-robot stays in the blood circulation, the liver clears it or it is destroyed by nuclease enzymes.
With these safeguards, the risk of damage to healthy cells is very low, near zero!
But we know that no smart system is utterly safe, especially a self-assembled one!
However, since the DNA bot carries therapeutic cargo, unforeseen dangers remain; it is not too hard to imagine this DNA nano-robot acting against healthy cells, the way the body's immune system attacks its own cells in, for example, multiple sclerosis (MS).
||




Description of Robotic DNA:
|| Despite the obvious importance of DNA in understanding the molecular details of both heredity and development, it was not until after the publication of the proposed double-helical structure that DNA started increasingly to occupy the interest of biologists and finally became the focus of the study of genetics and development. The last fifty years have seen the reorganization of most of biology around DNA as the central molecule of heredity, development, cell function and evolution.
An entire theory has been built around DNA as the Secret of Life, the Master Molecule, the Holy Grail of biology: a narrative in which we are lumbering robots created, body and mind, by our DNA. This theory has implications, not only for our understanding of biology, but for our attempts to manipulate and control biological processes in the interests of human health and welfare, and for the situation of the rest of the living world.
The other side of the movement of DNA to the center of attention in biology has been the development of tools for the automated reading of DNA sequences, for the laboratory replication and alteration of DNA sequences and for the insertion of pieces of DNA into an organism’s genome. Taken together, these techniques provide the power to manipulate an organism’s DNA to order. The three obvious implications of this power are in the detection and possible treatment of diseases, the use of organisms as productive machines for the manufacture of specific biological molecules and the breeding of agricultural species with novel properties.
As in all other species, for any given gene, human mutations with deleterious effects almost always occur at low frequency. Hence specific genetic diseases are rare. Even in the aggregate, genes do not account for most human ill health. Given the cost and expenditure of energy that would be required to locate, diagnose and genetically repair any single disease, there is no realistic prospect of such genetic fixes as a general approach to this class of diseases. There are exceptions, such as sickle cell anemia and conditions associated with other abnormal hemoglobins, in which a non-negligible fraction of a population may be affected, so that these might be considered as candidates for gene therapy. But for most diseases that represent a substantial fraction of ill health and for which some evidence of genetic influence has been found, the relation between disease and DNA is much more complex and ambiguous.
Scientists have created microscopic robots out of DNA molecules that can walk, turn and even create tiny products of their own on a nano-scale assembly line.
Robots of the future could operate at the nano-scale, cleaning arteries or building computer components. The nano-spider, for example, moves along a track comprising stitched-together strands of DNA, essentially a pre-programmed course.
Using the robotic DNA origami method (complex 3-D shapes and objects are constructed by folding strands of DNA), the scientists created a nanosize robot in the form of an open barrel whose two halves are connected by a hinge.
The nanorobot’s DNA barrel acts as a container that can hold various types of contents, including specific molecules with encoded instructions that can interact with specific signaling receptors on cell surfaces, including disease markers.
The barrel is normally held shut by special DNA latches. But when the latches find their targets, they reconfigure, causing the two halves of the barrel to swing open and expose its contents, or payload.
The researchers used this system to deliver instructions, encoded in antibody fragments, to two different types of cancer cells, leukemia and lymphoma. In each case, the message to the cell was to activate its apoptosis, or suicide, switch, which allows aging or abnormal cells to be eliminated. This programmable nanotherapeutic approach was modeled on the body’s own immune system, in which white blood cells patrol the bloodstream for any signs of trouble.
Because DNA is a natural biocompatible and biodegradable material, DNA nanotechnology is widely recognized for its potential as a delivery mechanism for drugs and molecular signals. There have been significant challenges to its implementation, such as what type of structure to create; how to open, close, and reopen that structure to insert, transport, and deliver a payload; and how to program this type of nanoscale robot.
DNA consists of a string of four nucleotide bases known as A, T, G and C, which make the molecule easy to program. According to nature's rules, A binds only with T, and G only with C. With DNA, at the small scale, you can program these sequences to self-assemble and fold into a very specific final structure, with separate strands brought together to make larger-scale objects. The DNA design strategy is based on the idea of getting a long strand of DNA to fold in two dimensions, as if laid on a flat surface; scientists used a viral genome consisting of approximately 8,000 nucleotides to create 2-D stars. That single strand of DNA serves as a scaffold for the rest of the structure. Hundreds of shorter strands, each about 20 to 40 bases in length, combine with the scaffold to hold it in its final, folded shape.
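Here is a minimal sketch of that pairing rule, with an invented scaffold fragment; a staple binds a region when it is that region's reverse complement:

```python
# Watson-Crick pairing: A<->T, G<->C. A staple strand binds a scaffold
# region when it is the reverse complement of that region -- a toy
# version of the self-assembly rule described above.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(PAIR[base] for base in reversed(seq))

def staple_binds(scaffold_region, staple):
    return staple == reverse_complement(scaffold_region)

scaffold = "ATGCGTACGTTAGC"           # invented scaffold fragment
region = scaffold[0:6]                # "ATGCGT"
staple = reverse_complement(region)   # "ACGCAT"

print(staple)                         # ACGCAT
print(staple_binds(region, staple))   # True: this staple pins the region
print(staple_binds(region, "AAAAAA")) # False: mismatched staple won't bind
```

The design problem in DNA origami is essentially choosing hundreds of such staples so that each one pins two distant scaffold regions together.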
DNA is in many ways better suited to self-assembly than proteins, whose physical properties are both difficult to control and sensitive to their environment.
What has also been added is a new software program that interfaces with caDNAno, the program that allows users to manually create scaffold DNA origami from a two-dimensional layout. The new program takes the 2D blueprint and predicts the ultimate 3D shape of the design; it should also allow DNA origami designers to more thoroughly test their DNA structures and tweak them to fold correctly. At the molecular level, stress in the double helix of DNA decreases the folding stability of the structure and introduces local defects, both of which have hampered progress in the scaffold DNA origami field.
Now that we can assemble the DNA structures, the next question is what to do with them; the researchers are most excited about DNA carriers that can transport drugs to specific destinations in the body.
Another possible application of scaffold DNA origami could help reproduce part of the light-harvesting apparatus of photosynthetic plant cells. Researchers hope to recreate that complex series of about 20 protein subunits; but to do that, components must be held together in specific positions and orientations.
First, the general region of neurons associated with the movement of a particular body part or sensory function needs to be identified. Then, a means to decode these signals and translate them to a device that will mimic the movement or function, and continue to do so correctly in the long term, needs to be determined. General algorithms and mathematical equations have been created to translate these brain signals and predict the trajectory of the movement. But then artificial devices need to be created. These devices have to be able to process and store the signals like a miniature computer. So far, neurochips, little microchips used in the brain, have been created, but they are not yet as efficient and reliable as needed.
It is necessary to observe how larger populations of neurons interact and behave during motor movements in order to get a better idea of how the brain works. This newer technique also has the benefit of monitoring populations of neurons for longer periods of time.
This can be summarized by the estimated number of neurons from different parts of the brain that would be needed to obtain a correlation coefficient of 0.90. PMd would need the fewest neurons (approximately 480) to obtain a 90% correlation between neuron and robotic-arm movement, while iMI would need the most (1,195 neurons). All areas together would require approximately 500 neurons to obtain the 0.90 coefficient.
For now, there are too many open questions about the underlying neurobiological functions; however, significant progress is being made in the field, and it is not unreasonable to expect some fruition of this technology in the future.
In conclusion, robotic technology should be important to everyone. Not only can it replace what has been lost, but it can also greatly enhance the lives of everyone. Again, the hybrid part of robotics means that people and machines work in unison. Our uncanny ability to learn, combined with our own circuit board, the brain, can lead to the control of any complex machine by the use of trained neurons. After all, isn’t technology our way of building on what nature gave us? Most of the training, however, will be left for us humans to do, as we will become more and more the weakest link.
||



To be completed with an explanation of how humans are robots!

We are Quantum Computer;
http://physicsism.blogspot.com/2012/04/we-are-quantum-computer.html





| Innovation - OpenSource - Technology |

 Open Source Technology

Description of Open Source:
|| Nowadays, the patent wars have heated up further, and tech companies fight over patent infringement.
The conflicts between Apple, Samsung, HTC, Motorola, Google, Nokia, Microsoft…
Such behavior works against technological development, and monopoly slows scientific and technological progress; but here the only valuable thing is more profit.
These kinds of strict rules, which create a monopoly for the producer, reduce competition between companies.
But reducing the restrictions of copyright and patents can make a competitive environment for all producers and developers and increase the speed of scientific advancement. It is the best way to accelerate technological progress.
It’s Open Source…
In production and development, open source is a pragmatic methodology that promotes free redistribution of, and access to, the results of production and research, as well as information resources on different subjects.
Nowadays the phrase “open source” is used almost exclusively for subjects related to computers and software, but the public sharing of information is not limited to any time or subject; the concept of freely shared technological information existed long before computers. For example, cooking recipes have been shared since the beginning of human culture!
The leader of open-source products in software (operating systems) is the Unix-like operating system.
Many successful projects have been built and developed quickly on open source code, for example: Unix, Linux, Android, Ubuntu, OpenIndiana, FreeBSD, Chromium OS, Mac OS X, Firefox… Many of these belong to the same families and are derived from one another.
One of the largest achievements of open standards is the Internet. Researchers with access to the Advanced Research Projects Agency Network (ARPANET) used a process called Request for Comments to develop telecommunication network protocols. This collaborative process of the 1960s led to the birth of the Internet in 1969.
Open source gained hold with the rise of the Internet and the attendant need for massive retooling of computing source code. Opening the source code enabled a self-enhancing diversity of production models, communication paths, and interactive communities.
Early instances of the free sharing of source code include IBM's source releases of its operating systems and other programs in the 1950s and 1960s, and the SHARE user group that formed to facilitate the exchange of software.
Most economists agree that open-source candidates have an information-good aspect. In general, this means that the original work involves a great deal of time, money, and effort, while the cost of reproducing the work is very low, so that additional users may be added at zero or near-zero cost; this is referred to as the marginal cost of a product. Copyright creates a monopoly, so the price charged to consumers can be significantly higher than the marginal cost of production. This allows the producer to recoup the cost of making the original work without needing to find a single customer who can bear the entire cost. Conventional copyright thus creates access costs for consumers who value the work more than the marginal cost but less than the initial production cost.
Being organized effectively as a consumers' cooperative, the idea of open source is to reduce the access costs of the consumer and the creators of derivative works by reducing the restrictions of copyright. Basic economic theory predicts that lower costs would lead to higher consumption and also more frequent creation of derivative works.
However, others argue that because consumers do not pay for the copies, creators are unable to recoup the initial cost of production, and thus have no economic incentive to create in the first place. By this argument, consumers would lose out because some of the goods they would otherwise purchase would not be available at all. In practice, content producers can choose whether to adopt a proprietary license and charge for copies, or an open license. Some goods which require large amounts of professional research and development, such as the pharmaceutical industry (which depends largely on patents, not copyright for intellectual property protection) are almost exclusively proprietary.
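A toy arithmetic sketch of that argument, with all numbers invented for illustration:

```python
# Toy numbers (all invented) for the copyright-vs-marginal-cost argument.
initial_cost = 1_000_000.0   # cost of creating the original work
marginal_cost = 0.10         # cost of producing one more copy
monopoly_price = 50.0        # price under conventional copyright
buyers_at_price = 40_000     # consumers who value the work above $50

profit = buyers_at_price * (monopoly_price - marginal_cost) - initial_cost
print(profit)  # 996000.0 -- the monopoly price recoups the initial cost

# But every consumer who values a copy somewhere between $0.10 and $50
# is priced out; that foregone value is the "access cost" that open
# source tries to eliminate by pricing near the marginal cost.
```

The dispute in the paragraphs above is over who pays the $1,000,000 up front once the price falls toward $0.10.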
A report by the Standish Group states that adoption of open-source software models has resulted in savings of about $60 billion per year for consumers.
||


We must try to improve our technology, which means that, beyond money, companies should work together to invent new tech and develop more devices. But companies just think about money and exclusive tech.
However, if they worked together, there would be money for all of them. But they are fighting!
-          How can you invent?
Maybe we can look at it in two ways:
Humans need some device, and then create it.
Humans observe a new creative device, and then invent a new device with better performance.
So it is not scientific thinking to ban each other over new innovations, if there is just one different parameter in the new devices!





| Quantum Computer - Fuzzy Logic - Information |

Zero-One is not enough

Description of Quantum Computer:
|| The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of a tape of unlimited length divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write head reads these symbols and blanks, which gives the machine its instructions for performing a certain program. Does this sound familiar? Well, in a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1 or a superposition of 0 and 1; in other words, the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.
Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons, together with their respective control devices, working in concert to act as computer memory and a processor. Because a quantum computer can hold these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers. This superposition of qubits is what gives quantum computers their inherent parallelism. This parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer running at 10 teraflops (trillions of floating-point operations per second); today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
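To see why qubit counts matter so much, here is a minimal numpy sketch of the bookkeeping a classical computer must do to simulate n qubits (the 16-bytes-per-amplitude figure assumes complex128 numbers):

```python
# Classically simulating n qubits needs 2**n complex amplitudes -- the
# bookkeeping behind the "inherent parallelism" described above.
import numpy as np

def zero_state(n):
    """State vector of n qubits initialized to |00...0>."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    return state

print(zero_state(2))  # 4 amplitudes, one per basis state |00>,|01>,|10>,|11>

# Memory needed just to store an n-qubit state (16 bytes per complex128):
for n in (10, 20, 30):
    print(n, "qubits:", 2 ** n, "amplitudes,", 2 ** n * 16 / 2 ** 30, "GiB")
# 30 qubits already need about 16 GiB; every added qubit doubles it.
```

Each added qubit doubles the classical storage, which is one intuition for where the quantum machine's advantage comes from.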
Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system's integrity. Entanglement provides a potential answer. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, and the second atom can take on the properties of the first atom. So if left alone, an atom will spin in all directions. The instant it is disturbed it chooses one spin, or one value; and at the same time, the second entangled atom will choose an opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
Computer scientists control the microscopic particles that act as qubits in quantum computers by using several kinds of control devices: ion traps use optical or magnetic fields (or a combination of both) to trap ions; optical traps use light waves to trap and control particles; quantum dots are made of semiconductor material and are used to contain and manipulate electrons; semiconductor impurities contain electrons by using "unwanted" atoms found in semiconductor material; and superconducting circuits allow electrons to flow with almost no resistance at very low temperatures.
Quantum computers could one day replace silicon chips, just like the transistor once replaced the vacuum tube. But for now, the technology required to develop such a quantum computer is beyond our reach. Most research in quantum computing is still very theoretical.
The most advanced quantum computers have not gone beyond manipulating more than 16 qubits, meaning that they are a far cry from practical application. However, the potential remains that quantum computers one day could perform, quickly and easily, calculations that are incredibly time-consuming on conventional computers.
To date, the two most promising uses for such a device are quantum search and quantum factoring. To understand the power of a quantum search, consider classically searching a phonebook for the name which matches a particular phone number. If the phonebook has 10,000 entries, on average you'll need to look through about half of them, 5,000 entries, before you get lucky. A quantum search algorithm only needs to guess 100 times. With 5,000 guesses a quantum computer could search through a phonebook with 25 million names.
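A quick sanity check of those numbers, assuming the square-root scaling of quantum (Grover-style) search:

```python
# Grover-style quantum search needs on the order of sqrt(N) queries,
# versus about N/2 on average classically -- the numbers quoted above.
import math

def classical_queries(n):
    """Average lookups to find one matching entry among n."""
    return n // 2

def quantum_queries(n):
    """Order-sqrt(n) Grover iterations."""
    return math.isqrt(n)

print(classical_queries(10_000))    # 5000
print(quantum_queries(10_000))      # 100
print(quantum_queries(25_000_000))  # 5000 guesses cover 25 million names
```

The advantage grows with the size of the problem: squaring the phonebook only doubles the sqrt(N) query count.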
Although quantum search is impressive, quantum factoring algorithms pose a legitimate, considerable threat to security. This is because the most common form of Internet security, public key cryptography, relies on certain math problems (like factoring numbers that are hundreds of digits long) being effectively impossible to solve. Quantum algorithms can perform this task exponentially faster than the best known classical strategies, rendering some forms of modern cryptography powerless to stop a quantum code-breaker.
Bits, either classical or quantum, are the simplest possible units of information. They are oracle-like objects that, when asked a question (i.e. when measured), can respond in one of only two ways. Measuring a bit, either classical or quantum, will result in one of two possible outcomes. At first glance, this makes it sound like there is no difference between bits and qubits. In fact, the difference is not in the possible answers, but in the possible questions. For normal bits, only a single measurement is permitted, meaning that only a single question can be asked: Is this bit a zero or a one? In contrast, a qubit is a system which can be asked many, many different questions, but to each question, only one of two answers can be given.
That is the way in which a one-qubit quantum computer is supposed to work; but what happens when things go wrong? For a classical bit, the only thing that can go wrong is for the bit to unexpectedly flip from zero to one or one to zero. The same type of thing can happen to qubits, in the form of unexpected or unwanted rotations. But there's another type of process, one that researchers in quantum computing are constantly fighting to eliminate: decoherence. Decoherence happens when something outside of the quantum computer performs a measurement on a qubit, the result of which we never learn.
Pairs of qubits are much, much more than the sum of their parts.
Classical bits only become marginally more interesting when paired—it literally only makes the difference between counting to two and counting to four. Pairs of quantum bits, on the other hand, can be used to create entanglement. This phenomenon became one of the most controversial arguments in 20th century physics. It revolved around whether it could exist at all.
Not only can a single qubit take on a whole sphere full of values, it can only be measured along a single axis at a time. Not only that, but measuring changes its state from whatever it was before the measurement to whatever state the measurement produced. That's a problem. In fact, it can be proven that even in principle it is not possible to copy an unknown qubit's state.
Consider the "singlet state," an example of an entangled two-qubit state. A singlet state has two defining characteristics:
Any single-qubit measurement performed on one half of the
singlet state will give a totally random result.
Any time the same single-qubit measurement is performed on
Both qubits in a singlet state, the two measurements will give opposite results.
To illustrate, imagine someone showed you a pair of coins, claiming that when both were flipped at the same time, one would always come up heads and one would always come up tails, but which was which would be totally random. What if they claimed that this trick would work instantly, even if the coins were on opposite sides of the Universe? Yet time and time again, experiment after experiment, the results show that something about local realism must be wrong: either the events simply cannot be predicted, even in principle, or there is something fundamentally nonlocal about entanglement, an ever-present bond between entangled particles which persists across any distance.
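A minimal simulation of the two singlet properties, using the standard state-vector formalism and measurement in the computational basis:

```python
# Simulating the singlet state (|01> - |10>)/sqrt(2): each qubit's
# outcome alone is random, but the pair is always opposite.
import numpy as np

rng = np.random.default_rng()

# Amplitudes over the two-qubit basis |00>, |01>, |10>, |11>:
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def measure_both(state):
    """Measure both qubits in the computational basis."""
    probabilities = np.abs(state) ** 2        # Born rule
    outcome = rng.choice(4, p=probabilities)  # index of the basis state
    return outcome >> 1, outcome & 1          # (first qubit, second qubit)

print([measure_both(singlet) for _ in range(10)])
# Only (0, 1) and (1, 0) ever appear, in random order: each side alone
# is a fair coin flip, yet the two results always disagree.
```

Each column of outcomes alone looks like pure noise; only comparing the pairs reveals the entanglement.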
To give you an idea, consider that single-qubit states can be represented by a point inside a sphere in 3-dimensional space. Two-qubit states, in comparison, need to be represented as a point in 15-dimensional space.
It's no wonder, therefore, that quantum physicists talk about a 100-qubit quantum computer like it's the holy grail. It's simply much too complicated for us to simulate using even the largest conceivable classical computers.
If we want to measure the polarization of a photon, we put it through a polarizer. What that polarizer actually does is couple a polarization qubit to a spatial qubit, resulting in a superposition of two possible realities. That superposition is an entangled state. Using a different polarizer, it would be straightforward to unentangle it without ever making a measurement, effectively erasing the fact that the first measurement ever happened at all. Instead, a photodetector is placed in the path of the transmitted half of the entangled state. If there is a photon there, it will excite an electron. That excited electron will cause an electron avalanche, which will cause a current to surge in a wire, which will be sent to a classical computer, which will change the data in that computer's RAM, which will then finally be viewed by you.
That chain means every part of the experiment, even the experimenter, is part of a single quantum superposition. Naturally, you might imagine that at some point something breaks the superposition, sending the state irreversibly down one path or the other. The problem is that every time we've followed the chain of larger and larger entangled states, they always appear to be in a superposition, in this pseudo-magical state where any set of axes is equally valid and every operation is reversible.
Maybe, at some point, it all gets too big, and new physics happens. In other words, something beyond quantum mechanics stops the chain of larger and larger entangled states, and this new physics gives rise to our largely classical world. Many physicists think that this happens; many others think it doesn't, and instead imagine the universe as an unfathomably complex, inescapably beautiful symphony of possibilities, each superposed reality endlessly pulsing in time to its own energy.
-

In 20 years we may be able to combine all our communication systems, cell phones, computers, TV, radio and the internet, into chips on a thin headband that transmits information between the internet and our brain, and also to other headbands. That connection could give us network-enabled telepathy: we would communicate directly with another person's headband on the other side of the world, using just our thoughts.
Recognizing thoughts instead of spoken words may seem difficult, but with training, thought-talking could become easy and routine.
Your computer-driven, auto-drive electric car rolls its top down on this warm day. You manually drive to the electronic roadway on-ramp and relinquish the wheel. Your headband selects a video to enjoy on the way to the airport, where your smart car drops you off at the terminal and then auto-parks itself. An intelligent camera scans your mind and quickly approves you; no waiting for ticket-check or security. While boarding the plane, you see a familiar face; your headband immediately flashes his identity data and displays it before your eyes. Your headband enables you to speak or think any question and get an immediate answer.
This will help a lot of people: the need to learn languages, for example, would disappear, and the headbands would be available to everyone.
We can say quantum computers will greatly improve relationships: no more forgetting names and details, and the increased intimacy generated by communicating through thoughts could bring people around the world closer together.
We still have significant research and development ahead of us, as we are currently confronted with an unacceptably large amount of data to be processed simultaneously, owing to the lack of data already present in the processor at the moment of calculation.
Even then, the answer obtained can only be as certain as the moment in time at which the question was asked.
||
        
(We will complete this by introducing the operating system of the quantum computer: the software abilities of quantum computers, etc.)