
| Quantum Computer – Operating System - Human |


Quantum Computer's Operating System
We are Quantum Computer - Part 2


Robot/Human is Human/Robot.
We learn how to connect to others.
We LEARN everything.
It is just Information Storing: Data + Mechanism (Algorithm).
So thinking is a process, an interaction between data and mechanism. Then we learn how to control this process. By combining data, mechanisms, or both, we can imagine new information, new strings of bits, and then we can control our hardware (the body).
After that, the body performs the orders and commands of our data centers and processors. All of this, beyond the holographic aspect, appears as chemical interaction.
We receive information from the environment, or from ourselves through imagination, store it in our storage (cells), and then use it via electromagnetic pulses (spin).
Everything depends on Information storing.
The way of storing is the same for everyone, but the information and the effects of the environment are not; hence different Personality and Mentality, different tact and taste (desire). But there is nothing beyond our understanding, no mysterious force called Love, Sentiment, Kindness and so on; there is just how the Hardware works based on the Software.
It is about which hormones are lacking or in excess!
Which vitamins decrease or increase! And so on.
-       For example: what is the meaning of Beauty?
Some define it in terms of regularity, some in terms of propriety or symmetry, and some define beauty as part of a chaotic system, like fractals. How we define beauty determines how we look at and observe the universe:
a particle, flowers, butterflies, stones, a quantum system, light, stars, the sky, clouds, the sea, a boy, dust, ash, girls and so on.
It is nothing more than sensors and the parameters set on them.
We can program sensors to detect a human face (as you can see on smartphones) or to detect a smile. So we can do it in more detail, to find a beautiful smile or face, or set more parameters for every part of the human body, or a landscape, or ... (a rough sketch follows below).
Beyond surfaces there are parameters for personality and behavior:
internal and external parameters.
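As a rough, non-authoritative sketch of this "beauty is just sensors plus parameters" idea, here is a toy example assuming the OpenCV library and its bundled Haar cascade files are available; the weights and the "regularity" proxy are invented purely for illustration, not a real model of beauty.

# Toy sketch: faces and smiles detected with OpenCV's bundled Haar
# cascades, combined into an arbitrary, made-up "score". The weights
# below are illustrative assumptions, not a real model of beauty.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def face_parameters(image_path):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        smiling = len(smile_cascade.detectMultiScale(roi, 1.7, 20)) > 0
        regularity = 1.0 - abs(w - h) / max(w, h)   # crude stand-in parameter
        score = 0.6 * regularity + 0.4 * (1.0 if smiling else 0.0)
        results.append({"box": (x, y, w, h), "smiling": smiling,
                        "score": round(score, 2)})
    return results

# Example (hypothetical file name): print(face_parameters("portrait.jpg"))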
When humans connect to others, they all just wait to see how the others treat them, and then they decide whether to continue with them or not. Sometimes two people become so similar and compatible that they will be together forever.
Like compatible systems on a network, or a healthy computer on a private network. Each person has their own private network: the network between you and the things in your room, such as books, mobile, laptops, PC and so on.
Maybe your mobile is more compatible with you than other people are!
And there are networks (societies) with your close friends, family and others.
So you have your own parameters, based on some criteria, for communicating with others. Scientific parameters or some other kind...
But of course we can behave in a more advanced way than simple computers, because we are Quantum Computers. And this is our difference from animals and things: we can learn, storing data and the mechanism of how to use it.

So the Operating System of a Quantum Computer will have the ability to learn.
Store data and then learn how to use it; that is called experience. How do we gain experience? We do things ourselves, or we observe how others do them. So let computers search on YouTube! Just search a topic and fetch several videos, or search on Google to observe images. Computers can recognize pictures and videos (like the face detection or motion detection on smart TVs and so on). We also have smart materials (nano-materials) that react to motion, light or a new state.
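A minimal sketch of that "data plus mechanism equals experience" idea, in plain Python. The class and method names are invented here, and the "mechanism" is just a lookup, standing in for whatever a real learning system would extract from videos or images.

# Toy "learning OS": each experience stores data plus a mechanism for
# using that data. `search`-style observation sources are deliberately
# left out; the observations are passed in directly.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Experience:
    topic: str
    data: List[str]                      # observations that were stored
    mechanism: Callable[[str], str]      # the learned way of using the data

@dataclass
class ToyLearningOS:
    memory: List[Experience] = field(default_factory=list)

    def learn(self, topic: str, observations: List[str]) -> None:
        # "Mechanism" here is nearest-match lookup, a stand-in for whatever
        # a real system would learn from the observations.
        def mechanism(query: str) -> str:
            matches = [o for o in observations if query.lower() in o.lower()]
            return matches[0] if matches else "no stored answer"
        self.memory.append(Experience(topic, observations, mechanism))

    def use(self, topic: str, query: str) -> str:
        for exp in self.memory:
            if exp.topic == topic:
                return exp.mechanism(query)
        return "never learned this topic"

os_ = ToyLearningOS()
os_.learn("greetings", ["hello: wave your hand", "goodbye: wave and turn away"])
print(os_.use("greetings", "hello"))     # -> "hello: wave your hand"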
We will complete this with more details about the Quantum Computer OS, to explain what kinds of new features it can include.

So accept that Robots can love others (because of beauty parameters, or personality and mental parameters).

-       To be continued…

To explain the details of the Quantum Computer's Operating System.









| Quantum - DNA - Robots |

Robotic DNA


Brief Introduction of Robotic DNA;
|| Deoxyribonucleic acid (DNA) is a nucleic acid containing the genetic instructions used in the development and functioning of all known living organisms. The DNA segments that carry this genetic information are called genes. Along with RNA and proteins, DNA is one of the three major macromolecules that are essential for all known forms of life.
DNA is a self-assembling molecule made of two strands. Each strand is a polymer whose monomer building block is called a nucleotide.
A nucleotide monomer consists of three parts:
1-      A five-carbon sugar (deoxyribose)
2-      One to three phosphate groups
3-      A nitrogenous organic base (adenine, thymine, guanine or cytosine)

DNA, together with RNA, is a smart system that can respond to environmental changes. In other words, the command and control center of the cell is the nucleus, and all of its activities are carried out through DNA. With this feature we can build smart nano-robots, like a DNA bot that can explore, identify and react to the state of its environment; or carry therapeutic cargo and deliver it to the target cell without any mistake; or control the activity of a cell, even to the point of damaging it!

Creating nano-robots by DNA origami: DNA origami is a method of folding DNA at the nanoscale to create arbitrary two- and three-dimensional shapes, by taking long single strands of DNA and combining them with hundreds of short strands.
The strands are held together by complementary pairing between the organic bases (while the backbone of each strand is linked by phosphodiester bonds). By this method we can make a tiny DNA robot that can seek out and destroy specific cells, including cancer cells.
This nano-robot is barrel-shaped, about 35 nanometers in diameter, and has two short nucleotide strands, named latches, that identify cell-surface proteins, including disease markers. Inside, there are 12 linker regions to attach the drug cargo.
When the latches recognize the target cells, such as cancer cells, they change their shape and the barrel opens to deliver the cargo drug.
The DNA nanorobot is highly specific, as different hinges and molecular messages can be switched in and out. This means it could potentially be used to treat a variety of diseases.
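The open/closed behaviour described above can be pictured as a simple state machine. The following is only a schematic software model of that logic, not biology; the marker names and cargo are invented for illustration.

# Schematic model of the barrel-shaped nanorobot's behaviour: the latches
# stay closed until both recognise their target markers, then the barrel
# opens and the cargo is released. Names and counts are illustrative.
from dataclasses import dataclass, field

@dataclass
class DnaNanobot:
    latch_targets: tuple              # cell-surface markers both latches must see
    cargo: list = field(default_factory=list)
    is_open: bool = False

    def sense(self, surface_markers: set) -> None:
        # Both latches must recognise their targets before the barrel opens.
        if all(t in surface_markers for t in self.latch_targets):
            self.is_open = True

    def deliver(self) -> list:
        if not self.is_open:
            return []                 # closed barrel: nothing is released
        payload, self.cargo = self.cargo, []
        return payload

bot = DnaNanobot(latch_targets=("marker_A", "marker_B"),
                 cargo=["antibody_fragment"] * 12)    # 12 linker sites
bot.sense({"marker_A"})                 # healthy-looking cell: stays closed
print(bot.deliver())                    # -> []
bot.sense({"marker_A", "marker_B"})     # disease markers found: opens
print(len(bot.deliver()))               # -> 12 antibody fragments released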

Is there no risk for healthy cells? Nanorobots can be programmed to release their payload only when the target cell is in the correct disease state; and if a nano-robot lingers in the blood circulation, the liver clears it or nuclease enzymes destroy it.
Given these capabilities, the risk of damage to healthy cells is very low, near zero!
But we know that no smart system is utterly safe, especially a self-assembled one!
Still, since the DNA bot carries a therapeutic cargo, an unforeseen danger is not hard to imagine: such a DNA nano-robot could act against healthy cells, much as the body's immune system attacks its own cells in, for example, multiple sclerosis (MS).
||




 Description of Robotic DNA;
|| Despite the obvious importance of DNA in understanding the molecular details of both heredity and development, it was not until after the publication of the proposed double-helical structure that DNA started increasingly to occupy the interest of biologists and finally became the focus of the study of genetics and development. The last fifty years have seen the reorganization of most of biology around DNA as the central molecule of heredity, development, cell function and evolution.
An entire theory has been built on DNA as the Secret of Life, the Master Molecule, the Holy Grail of biology: a narrative in which we are lumbering robots created, body and mind, by our DNA. This theory has implications not only for our understanding of biology, but for our attempts to manipulate and control biological processes in the interests of human health and welfare, and for the situation of the rest of the living world.
The other side of the movement of DNA to the center of attention in biology has been the development of tools for the automated reading of DNA sequences, for the laboratory replication and alteration of DNA sequences and for the insertion of pieces of DNA into an organism’s genome. Taken together, these techniques provide the power to manipulate an organism’s DNA to order. The three obvious implications of this power are in the detection and possible treatment of diseases, the use of organisms as productive machines for the manufacture of specific biological molecules and the breeding of agricultural species with novel properties.
As in all other species, for any given gene, human mutations with deleterious effects almost always occur at low frequency. Hence specific genetic diseases are rare. Even in the aggregate, genes do not account for most of human ill health. Given the cost and expenditure of energy that would be required to locate, diagnose and genetically repair any single disease, there is no realistic prospect of such genetic fixes as a general approach to this class of diseases. There are exceptions, such as sickle cell anemia and conditions associated with other abnormal hemoglobins, in which a non-negligible fraction of a population may be affected, so that these might be considered as candidates for gene therapy. But for most diseases that represent a substantial fraction of ill health, and for which some evidence of genetic influence has been found, the relation between disease and DNA is much more complex and ambiguous.
Scientists have created microscopic robots out of DNA molecules that can walk, turn and even create tiny products of their own on a nano-scale assembly line.
Robots of the future could operate at the nano-scale, cleaning arteries or building computer components. One such "nano-spider" moves along a track comprising stitched-together strands of DNA that is essentially a pre-programmed course.
Using the DNA origami method (in which complex 3-D shapes and objects are constructed by folding strands of DNA), the scientists created a nanosized robot in the form of an open barrel whose two halves are connected by a hinge.
The nanorobot’s DNA barrel acts as a container that can hold various types of contents, including specific molecules with encoded instructions that can interact with specific signaling receptors on cell surfaces, including disease markers.
The barrel is normally held shut by special DNA latches. But when the latches find their targets, they reconfigure, causing the two halves of the barrel to swing open and expose its contents, or payload.
The researchers used this system to deliver instructions, encoded in antibody fragments, to two different types of cancer cells, leukemia and lymphoma. In each case, the message to the cell was to activate the apoptosis, or suicide, switch, which allows aging or abnormal cells to be eliminated. This programmable nanotherapeutic approach was modeled on the body's own immune system, in which white blood cells patrol the bloodstream for any signs of trouble.
Because DNA is a natural biocompatible and biodegradable material, DNA nanotechnology is widely recognized for its potential as a delivery mechanism for drugs and molecular signals. There have been significant challenges to its implementation, such as what type of structure to create; how to open, close, and reopen that structure to insert, transport, and deliver a payload; and how to program this type of nanoscale robot.
DNA consists of a string of four nucleotide bases known as A, T, G and C, which make the molecule easy to program. According to nature's rules, A binds only with T, and G only with C. With DNA, at the small scale, you can program these sequences to self-assemble and fold into a very specific final structure, with separate strands brought together to make larger-scale objects. The DNA design strategy is based on the idea of getting a long strand of DNA to fold in two dimensions, as if laid on a flat surface; the scientists used a viral genome consisting of approximately 8,000 nucleotides to create 2-D stars. That single strand of DNA serves as a scaffold for the rest of the structure. Hundreds of shorter strands, each about 20 to 40 bases in length, combine with the scaffold to hold it in its final, folded shape.
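The base-pairing rule quoted here (A with T, G with C) is easy to state as a few lines of code; this tiny helper just returns the complementary strand, the rule that the short "staple" strands exploit when they fold the scaffold.

# Base-pairing rules from the paragraph above: A binds T, G binds C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    return "".join(PAIR[base] for base in strand.upper())

print(complement("ATGC"))        # -> TACG
print(complement("GATTACA"))     # -> CTAATGT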
DNA is in many ways better suited to self-assembly than proteins, whose physical properties are both difficult to control and sensitive to their environment.
What has also been added is a new interface to a software program called caDNAno, which allows users to manually create scaffold DNA origami from a two-dimensional layout. The new program takes the 2D blueprint and predicts the ultimate 3D shape of the design; it should also allow DNA origami designers to more thoroughly test their DNA structures and tweak them to fold correctly. At the molecular level, stress in the double helix of DNA decreases the folding stability of the structure and introduces local defects, both of which have hampered progress in the scaffold DNA origami field.
Now that the DNA structures have been assembled, the next question is what to do with them; the researchers are most excited about a DNA carrier that can transport drugs to specific destinations in the body.
Another possible application of scaffold DNA origami could help reproduce part of the light-harvesting apparatus of photosynthetic plant cells. Researchers hope to recreate that complex series of about 20 protein subunits; but to do that, components must be held together in specific positions and orientations.
First, the general region of neurons associated with the movement of a particular body part or sensory function needs to be identified. Then, a means to decode these signals and translate them to a device that will mimic the movement or function and continue to correctly do so in the long-term needs to be determined. General algorithms or mathematical equations have been created to translate these brain signals that can predict the trajectory of the movement. But, then artificial devices need to be created. These devices have to be able to process and store the signals like a mini-mini-computer. So far, neurochips, little microchips used in the brain, have been created, but have yet to be as efficient and reliable as needed.
It is necessary to observe how larger populations of neurons interact and behave during motor movements in order to get a better idea of how the brain works. This newer technique also has the benefit of monitoring populations of neurons for longer periods of time.
This can be illustrated by the estimated number of neurons, from different parts of the brain, that would be needed to obtain a correlation coefficient of 0.90 between neuron activity and robotic arm movement. PMd would need the fewest neurons (approximately 480) to reach the 0.90 correlation, while iMI would need the most (1,195 neurons). Taking all neurons together, approximately 500 neurons would be required to reach the 0.90 coefficient.
For now, there are too many open questions about the underlying neurobiological functions; however, significant progress is being made in the field, and it is not unreasonable to expect some fruition of this technology in the future.
So, concluding retrospectively, robotic technology should be important to everyone. Not only can it replace what has been lost, but it can also greatly enhance the lives of everyone. Again, the hybrid part of robotics means that people and machines work in unison. Our uncanny ability to learn, combined with our own circuit board, the brain, can lead to the control of any complex machine by the use of trained neurons. After all, isn't technology our way of building on what nature gave us? Most of the training, however, will be left for us humans to do, as we will become more and more the weakest link.
||



This is completed by the explanation of how Humans are Robots:

We are Quantum Computer;
http://physicsism.blogspot.com/2012/04/we-are-quantum-computer.html





| Innovation - OpenSource - Technology |

 Open Source Technology

Description of Open Source:
|| Nowadays the patent wars have heated up further, and tech companies are in conflict over breaches of patents.
The conflicts between Apple, Samsung, HTC, Motorola, Google, Nokia, Microsoft...
Such behavior works against technological development, and monopoly slows scientific and technical progress, but here the only thing valued is more profit.
These kinds of strict rules, which create a monopoly for the producer, reduce competition between companies.
But reducing the restrictions of copyright and patents can create a competitive environment for all producers and developers and increase the speed of scientific advancement. It is the best way to accelerate technological progress.
It is Open Source...
In production and development, open source is a pragmatic methodology that promotes free redistribution of, and access to, the results of products and research, as well as information resources on different subjects.
Nowadays the phrase "open source" is used mostly in subjects related to computers and software, but the public sharing of information is not limited to a particular time or subject: the concept of freely sharing technological information existed long before computers. For example, cooking recipes have been shared since the beginning of human culture!
The leading open-source software products are the Unix-like operating systems.
Many successful projects have been created and developed quickly on the basis of open source code, for example Unix, Linux, Android, Ubuntu, OpenIndiana, FreeBSD, Chromium OS, Mac OS X and Firefox, which belong to the same families or are derived from one another.
One of the largest achievements of open standards is the Internet. Researchers with access to the Advanced Research Projects Agency Network (ARPANET) used a process called Request for Comments to develop telecommunication network protocols. This collaborative process of the 1960s led to the birth of the Internet in 1969.
Open source gained hold with the rise of the Internet and the attendant need for massive retooling of computing source code. Opening the source code enabled a self-enhancing diversity of production models, communication paths, and interactive communities.
Early instances of the free sharing of source code include IBM's source releases of its operating systems and other programs in the 1950s and 1960s, and the SHARE user group that formed to facilitate the exchange of software.
Most economists agree that open-source candidates have an information good aspect! In general, this suggests that the original work involves a great deal of time, money, and effort. However, the cost of reproducing the work is very low, so that additional users may be added at zero or near zero cost – this is referred to as the marginal cost of a product. Copyright creates a monopoly so the price charged to consumers can be significantly higher than the marginal cost of production. This allows the producer to recoup the cost of making the original work, without needing to find a single customer that can bear the entire cost. Conventional copyright thus creates access costs for consumers who value the work more than the marginal cost but less than the initial production cost.
Being organized effectively as a consumers' cooperative, the idea of open source is to reduce the access costs of the consumer and the creators of derivative works by reducing the restrictions of copyright. Basic economic theory predicts that lower costs would lead to higher consumption and also more frequent creation of derivative works.
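To make the marginal-cost argument concrete, here is a toy calculation with invented numbers (a fixed creation cost, a near-zero copying cost, and a made-up demand curve); it only illustrates the trade-off the text describes, not real market data.

# Toy illustration of the "information good" argument above.
fixed_cost = 1_000_000          # cost of producing the original work
marginal_cost = 0.10            # cost of each additional copy

def units_sold(price):          # made-up linear demand curve
    return max(0, int(2_000_000 * (1 - price / 50)))

for price in (30.0, 0.10):      # "copyright price" vs price at marginal cost
    q = units_sold(price)
    producer = q * (price - marginal_cost) - fixed_cost
    print(f"price ${price}: {q:,} copies reach users, "
          f"producer nets ${producer:,.0f}")
# Lower price -> far more users served, but the producer can no longer
# recoup the fixed cost, which is exactly the tension the text describes.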
However, others argue that because consumers do not pay for the copies, creators are unable to recoup the initial cost of production, and thus have no economic incentive to create in the first place. By this argument, consumers would lose out because some of the goods they would otherwise purchase would not be available at all. In practice, content producers can choose whether to adopt a proprietary license and charge for copies, or an open license. Some goods which require large amounts of professional research and development, such as the pharmaceutical industry (which depends largely on patents, not copyright for intellectual property protection) are almost exclusively proprietary.
A report by the Standish Group states that adoption of open-source software models has resulted in savings of about $60 billion per year for consumers.
||


We must try to improve our technology, and that means that, beyond money, companies should work together to invent new technology and develop more devices. But companies just think about money and exclusive technology.
However, if they worked together there would be money for all of them. But instead they are fighting!
-          How can you invent?
Maybe we can look at it in two ways:
Humans need some device, and then they create it.
Humans observe a new, creative device and then invent a new device with better performance.
So it is not scientific thinking to ban each other because of new innovations,
when there is just one different parameter in the new device.





| Quantum Computer - Fuzzy Logic - Information |

Zero-One is not enough

Description of Quantum Computer;
|| The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of a tape of unlimited length that is divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write device reads these symbols and blanks, which gives the machine its instructions to perform a certain program. Does this sound familiar? Well, in a quantum Turing machine the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1 or a superposition of 0 and 1; in other words, the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.
Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers. This superposition of qubits is what gives quantum computers their inherent parallelism. This parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
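The bookkeeping behind "multiple states simultaneously" can be sketched on a classical machine: an n-qubit register is described by 2^n complex amplitudes. The snippet below (assuming NumPy) only simulates that description; it is not a quantum computer, and it deliberately avoids building the billion-amplitude 30-qubit vector.

# An n-qubit register is a vector of 2**n complex amplitudes.
import numpy as np

def uniform_superposition(n_qubits):
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

for n in (1, 2, 10, 30):
    print(f"{n:>2} qubits -> {2**n:,} amplitudes")   # 30 qubits: 1,073,741,824

state = uniform_superposition(2)
print(state)                          # four equal amplitudes: |00>,|01>,|10>,|11>
print(np.sum(np.abs(state) ** 2))     # probabilities sum to 1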
Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system's integrity. Entanglement provides a potential answer. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, and the second atom can take on the properties of the first atom. So if left alone, an atom will spin in all directions. The instant it is disturbed it chooses one spin, or one value; and at the same time, the second entangled atom will choose an opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
Computer scientists control the microscopic particles that act as qubits in quantum computers by using control devices:
1-      Ion traps use optical or magnetic fields (or a combination of both) to trap ions;
2-      Optical traps use light waves to trap and control particles;
3-      Quantum dots are made of semiconductor material and are used to contain and manipulate electrons;
4-      Semiconductor impurities contain electrons by using "unwanted" atoms found in semiconductor material;
5-      Superconducting circuits allow electrons to flow with almost no resistance at very low temperatures.
Quantum computers could one day replace silicon chips, just like the transistor once replaced the vacuum tube. But for now, the technology required to develop such a quantum computer is beyond our reach. Most research in quantum computing is still very theoretical.
The most advanced quantum computers have not gone beyond manipulating more than 16 qubits, meaning that they are a far cry from practical application. However, the potential remains that quantum computers one day could perform, quickly and easily, calculations that are incredibly time-consuming on conventional computers.
To date, the two most promising uses for such a device are quantum search and quantum factoring. To understand the power of a quantum search, consider classically searching a phonebook for the name which matches a particular phone number. If the phonebook has 10,000 entries, on average you'll need to look through about half of them (5,000 entries) before you get lucky. A quantum search algorithm only needs to guess 100 times. With 5,000 guesses a quantum computer could search through a phonebook with 25 million names.
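The numbers in that phonebook example follow from the square-root scaling of quantum (Grover-style) search; a quick check of the article's own figures:

import math

entries = 10_000
print(entries // 2)                 # classical average: 5,000 looks
print(int(math.sqrt(entries)))      # quantum: about 100 guesses

guesses = 5_000
print(guesses ** 2)                 # 5,000 quantum guesses cover 25,000,000 names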
Although quantum search is impressive, quantum factoring algorithms pose a legitimate, considerable threat to security. This is because the most common form of Internet security, public key cryptography, relies on certain math problems (like factoring numbers that are hundreds of digits long) being effectively impossible to solve. Quantum algorithms can perform this task exponentially faster than the best known classical strategies, rendering some forms of modern cryptography powerless to stop a quantum code-breaker.
Bits, either classical or quantum, are the simplest possible units of information. They are oracle-like objects that, when asked a question (i.e. when measured), can respond in one of only two ways. Measuring a bit, either classical or quantum, will result in one of two possible outcomes. At first glance, this makes it sound like there is no difference between bits and qubits. In fact, the difference is not in the possible answers, but in the possible questions. For normal bits, only a single measurement is permitted, meaning that only a single question can be asked: Is this bit a zero or a one? In contrast, a qubit is a system which can be asked many, many different questions, but to each question, only one of two answers can be given.
That is how a one-qubit quantum computer is supposed to work; but what happens when things go wrong? For a classical bit, the only thing that can go wrong is for the bit to unexpectedly flip from zero to one or from one to zero. The same kind of thing can happen to qubits, in the form of unexpected or unwanted rotations. But there is another type of process, one that researchers in quantum computing are constantly fighting to eliminate: decoherence. Decoherence happens when something outside of the quantum computer performs a measurement on a qubit, the result of which we never learn.
Pairs of qubits are much, much more than the sum of their parts.
Classical bits only become marginally more interesting when paired—it literally only makes the difference between counting to two and counting to four. Pairs of quantum bits, on the other hand, can be used to create entanglement. This phenomenon became one of the most controversial arguments in 20th century physics. It revolved around whether it could exist at all.
Not only can a single qubit take on a whole sphere full of values, it can only be measured along a single axis at a time. Not only that, but measuring it changes its state from whatever it was before the measurement to whatever state the measurement produced. That's a problem. In fact, it can be proven that even in principle it is not possible to copy an unknown qubit's state.
Consider the "singlet state," an example of an entangled two-qubit state. A singlet state has two defining characteristics:
1-      Any single-qubit measurement performed on one half of the singlet state will give a totally random result.
2-      Any time the same single-qubit measurement is performed on both qubits in a singlet state, the two measurements will give opposite results.
To explain these characteristics, imagine that someone showed you a pair of coins, claiming that when both were flipped at the same time, one would always come up heads and one would always come up tails, but which was which would be totally random. What if they claimed that this trick would work instantly, even if the coins were on opposite sides of the Universe? Yet time and time again, experiment after experiment, the results show that something about local realism must be wrong: either the events simply cannot be predicted, even in principle, or there is something fundamentally nonlocal about entanglement, an ever-present bond between entangled particles which persists across any distance.
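A quick classical simulation (assuming NumPy) of the two singlet-state properties listed above, measuring both qubits in the same basis: each individual outcome is random, yet the pair is always opposite. This only reproduces the statistics quantum mechanics predicts; it says nothing about how nature achieves them.

import numpy as np

rng = np.random.default_rng(0)
# |psi-> = (|01> - |10>)/sqrt(2); amplitudes for |00>,|01>,|10>,|11>
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
probs = np.abs(singlet) ** 2                  # [0, 0.5, 0.5, 0]

outcomes = rng.choice(4, size=10_000, p=probs)
first_bits = outcomes // 2                    # measurement result on qubit 1
second_bits = outcomes % 2                    # measurement result on qubit 2

print(first_bits.mean())                      # ~0.5: individually random
print(np.all(first_bits != second_bits))      # True: always opposite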
To give you an idea, consider that single-qubit states can be represented by a point inside a sphere in 3-dimensional space. Two qubit states, in comparison, need to be represented as a point in 15 dimensional space.
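The counting behind those 3 and 15 comes from standard density-matrix bookkeeping (not anything specific to this article): a general n-qubit state needs 4^n - 1 real parameters.

\rho \in \mathbb{C}^{2^{n}\times 2^{n}},\qquad \rho^{\dagger}=\rho,\qquad \operatorname{Tr}\rho = 1
\;\;\Longrightarrow\;\; \#\,\text{real parameters} = 4^{n}-1,
\qquad n=1:\ 3\ \text{(the Bloch ball)},\qquad n=2:\ 15.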
It's no wonder, therefore, that quantum physicists talk about a 100-qubit quantum computer like it's the holy grail. It's simply much too complicated for us to simulate using even the largest conceivable classical computers.
If we want to measure the polarization of a photon, we put it through a polarizer. What that polarizer actually does is couple a polarization qubit to a spatial qubit, resulting in a superposition of two possible realities. That superposition is an entangled state. Using a different polarizer, it would be straightforward to unentangle it without ever making a measurement, effectively erasing the fact that the first measurement ever happened at all. Instead, a photodetector is placed in the path of the transmitted half of the entangled state. If there is a photon there, it will excite an electron. That excited electron will cause an electron avalanche, which will cause a current to surge in a wire, which will be sent to a classical computer, which will change the data in that computer's RAM, which will then finally be viewed by you.
That equation means every part of the experiment, even the experimenter, is part of a single quantum superposition. Naturally, you might imagine that at some point something breaks the superposition, sending the state irreversibly down one path or the other. The problem is that every time we've followed the chain of larger and larger entangled states, they always appear to be in a superposition, in this pseudo-magical state where any set of axes is equally valid and every operation is reversible.
Maybe, at some point, it all gets too big, and new physics happens. In other words, something beyond quantum mechanics stops the chain of larger and larger entangled states, and this new physics gives rise to our largely classical world. Many physicists think that this happens; many physicists think it doesn't, and instead imagine the universe as an unfathomably complex, inescapably beautiful symphony of possibilities, each superposed reality endlessly pulsing in time to its own energy.
-

Within 20 years we may combine all our communication systems (cell phones, computers, TV, radio and the internet) into chips on a thin headband that transmits information between the internet and our brain, and also to other headbands. That connection could give us network-enabled telepathy: we will communicate directly with another person's headband on the other side of the world, using just our thoughts.
Recognizing thoughts instead of spoken voice may seem difficult, but with training, thought-talking could become easy and routine.
Your computer driven auto-drive electric car rolls its top down on this warm day. You manually drive to the electronic roadway on-ramp and relinquish the wheel. Your headband selects a video to enjoy on the way to the airport where your smart car drops you off at the terminal, then auto-parks itself. An intelligent cam scans your mind and quickly approves you; no waiting for ticket-check or security. While boarding the plane, you see a familiar face. Your headband immediately flashes his identity data and displays it on your eyes.  Our headband enables us to speak or think of any question and get an immediate answer.
This will help a lot of people, because the necessity to learn languages, for example, would disappear, and the headbands would be available to everyone.
We can say that quantum computers will greatly improve relationships: no more forgetting names and details, and the increased intimacy generated by communicating through thoughts could bring people around the world closer together.
With our headbands we will speak or think any question and get an immediate answer.
We still have some significant research and development ahead of us, as we are currently confronted with an unacceptably large amount of data to be processed simultaneously, due to the lack of data present in the processor at the moment of calculation.
Even then, the answer obtained must be the one most certain for the moment in time at which the question was asked.
||
        
        (We will complete this by introducing the Operating System of the Quantum Computer;
                the software abilities of Quantum Computers, etc.)








| OS - Computer - Information |

What is an Operating System?

Description of Operating System;
|| A mobile operating system (mobile OS) is the system that controls a smartphone, tablet, PDA or other mobile device. Modern mobile operating systems combine the features of a personal computer operating system with touch-screen, cellular, Bluetooth, WiFi, GPS navigation, camera, video camera, speech recognition, voice recorder, music player, near field communication, personal digital assistant (PDA) and other features.
A smart device is an electronic device that is cordless (except while being charged), mobile (easily transportable), always connected (via WiFi, 3G, 4G etc.) and capable of voice and video communication, internet browsing and geo-location (for search purposes), and that can operate to some extent autonomously. It is widely believed that these types of devices will outnumber all other forms of smart computing and communication in a very short time.
Smart Phones use many different operating systems. 
The evolution of smart devices has accelerated exponentially since the beginning of this century, and this trend continues to grow in significance. Mobility has already become an essential part of our daily lives, and the future will certainly bring natural interfaces between humans and the smart devices that will surround them in every environment. Behind this phenomenon is the general desire not only to have easy access to information, but to share that information, to pay for purchases, to access entertainment, to seek products, to buy them and more, all just by pushing the buttons of a single handheld device. There is a strong relationship between technology and equipment; they are developing together, side by side, and getting smarter and smarter. These increasingly smart devices are our interface with a world of technology content, applications and services; they let us interact with technology and reap its benefits. Now we are looking forward to the general availability of smart machine-to-machine communications between smart devices.
Technology has squeezed the functions of four different devices and merged them into one. This process of shrinkage is putting a world of functions, and the world itself, in the pockets of more and more people each day. The escalating demand for shrinking devices that combine portability and functionality is pushing the growth of advanced semiconductor manufacturing. Equipment manufacturers are doing extensive research in the field of surface mount device (SMD) manufacturing technologies. SMDs facilitate quick and inexpensive manufacturing of electronic equipment; they are a significant source of competitive advantage in sectors such as consumer electronics, automotive, education, healthcare and other industries as well.
The demand is increasing for a wide variety of micro-devices, so vendors are hard-pressed to furnish everything required to fill global distribution and supply chains. According to the European Platform for Micro & Nanomanufacturing, shrinking equipment is playing a pivotal role in social and economic development; it broadens access to healthcare, education and other essential social services by providing the platforms that enable organizations to meet community needs.
As ICT technology advances, we see growing connectivity among smart devices: computers, mobile phones and even televisions. With the widespread penetration of mobile phones and other handheld devices that connect to the Internet, nearly 4 billion people worldwide have some level of access to computing. Coupled with powerful and feature-rich software applications, these smart devices are helping to bridge the rural-urban digital divide. The convergence of device connectivity and software innovation is enabling a greater number of people and organizations around the world to access information and to communicate and collaborate in more powerful ways.
Large emerging markets such as China and India are exploring the potential of smart devices to improve healthcare services. These countries are generating tremendous demand for affordable and reliable smart medical devices to improve the treatment and care of millions of patients. Today, medical device designers are devising new equipment to enhance their diagnostic, monitoring, and treatment capabilities.
They are putting the capabilities of clinical devices into portable units the size of a cell phone. Equipment shrinkage in the healthcare sector now lets healthcare workers carry tools that once required huge machine installations in hospitals, and provide sophisticated services in remote areas.
The education sector is also readily adopting and utilizing small, handy teaching aids and gadgets. The production of low-cost, small laptops has greatly changed the paradigm of ICT-enabled education, especially in the developing world.
The consumer, though, is more concerned with the issues of reliability, power consumption, security, privacy and safety associated with smart devices. The shrinkage of equipment has changed the world we live in today, the way we communicate, network and interact with others, and will continue to do so for the foreseeable future.
Many studies have been done to clarify or understand the theoretical relationships between system design for mobile computing, human behaviors, social attributions and interaction outcomes. As a conclusion, we can doubt that our inevitable future is to become a machine-like collective society. How devices are used is not determined by their creators alone. Individuals influence how devices are used, and humans can be tenaciously social creatures. Given the importance of social relationships in our lives, we may adopt only those devices that support, rather than inhibit, such relationships.
With the substantial amount of skepticism related to technology, such findings seem to counterbalance the immediate threat that a thoroughly computerized future appears to hold. However, apart from personal prejudices, the wide range of social consequences that pervasive computing may have will certainly need to be addressed in future systems and debates.
-

Will there be 1,000 devices per person within a decade? Our surroundings at the office, at home and in public places will be almost polluted with embedded electronics. If they are smart enough, the devices will be able to find each other and share information and tasks within a local smart space. A smart space can operate on a stand-alone basis, sometimes even without contacting Internet services. In order to function, smart spaces require a common communication language and semantics between the devices.
At the end point of this vision we see a ubiquitous world in which it may be difficult to tell the difference between real and virtual objects. People will attend meetings at which participants are spread around the world, with some showing themselves as digital holographic avatar images. The same meeting will be held simultaneously in a virtual life, e.g. in Second Life, so it will be the people's own choice how they participate. There will be holographic objects in rooms showing news, information and infotainment.
Communication between people has broadened with the aid of the Internet and other digital communication methods. Social media have become part of people’s everyday lives. People’s voices can be heard more broadly and equally than ever before.
The number of users of social media services continues to grow. For example, Facebook has more than 500 million users, of which 150 million use the service with a smart device. The mobile dimension links users live to the services much more efficiently than over a computer.
Part of the teaching in schools and universities is carried out with video or web courses. Laptops and smart devices are used by students for their personal use but are not yet integrated into the teacher's material or teaching; only a few are using them.
IT systems are not generally interoperable. For example, heavy integration is needed to pass patient information between healthcare units.
 We have more and more meetings with people around the globe over long distances and less time is wasted travelling. Video, web and phone meetings are a good and cost-effective alternative to travelling.
Young people do not remember a time without the Internet and smart devices. They are 'connected to the Internet' from birth. They meet their friends more and more on the net rather than in real life.
Interactive media devices have tended to change the psychological and social aspects of human behavior, leading to a more voluntary sharing of personal and private information.
There is a tendency to spend an increasing amount of time on the Internet, which is time taken away from reading, playing, exercising and so on.
These changes also affect service sectors such as the postal service and media outlets. There are fewer letters and newspapers to deliver to homes because of the Internet. Fewer paper bills are sent; instead, bills and shopping are paid for directly on the Internet, creating an ever-growing virtual world. People often read newspapers online nowadays. The big advantage of these services is not only the speed at which information is transferred; the interactive aspect also adds much value, as opposed to the one-way stream of information previously provided by the different services.
One of the disadvantages is the identification and security aspect, which is exposed to an almost organic flow of viruses, resulting in people having different user names and accounts for home, work and hobbies. No one can remember all the passwords and PINs anymore without yellow post-it notes. Smart device and smart card technology provide one practical solution to this challenge.
We will carry a smart device with us at all times which gives us essential information and connectivity to the internet that we can no longer live without. The only thing people would take to a desert island would be their personal device.
In healthcare, it will become the norm to contact doctors and nurses over the Internet, for example, taking a Skype video call to a local hospital in the case of illness.
In the near future we can predict:
Schools will disappear, as teaching will be carried out wherever the students are. Schools will not be able to keep up with the fast-paced development of the digital world. Lectures will be consumed at home or on the move.
It will not be considered necessary, trendy or righteous to travel. Real-life telepresence will be used instead of travel. At first this will be based on holographic technologies, but later brain implants will give a very realistic feeling of presence anywhere in the world, making the necessary information available to any local presence. In the opposite sense, much education and training can be provided throughout the world, e.g. to under-developed countries and hard-to-access places: jungles, polar regions, remote locations, etc.
Embedded electronics will be the main source of new electronics in our lives.
Devices will have common ‘languages’ to exchange information and understand each other at a semantic level.
Products will be smart and able to store and present information throughout their lifecycles. For example, a car will carry information from the factory (who made it, which parts were integrated), from transport (any scratches or drops during cargo handling), from the shop (who test drove it), from use (owners, services, faults) and, in the end, which parts can be recycled.
Energy will continue to be a scarce resource, and its production, transfer, storage and use will be optimized with smart grids and energy-harvesting technologies.
Smart devices will provide us with digital senses and means to interact with virtual worlds.
Ever-present and fluent connectivity to the Internet will become so self-evident that the providing technologies and wireless networks disappear from the users' knowledge. Similarly, the cloud will be hidden from the users. Data, applications and services will simply be there, somewhere.
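A toy sketch of the "product carries its own lifecycle record" idea, using the car example above; the field names and events are invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LifecycleEvent:
    stage: str        # "factory", "transport", "shop", "use", "recycling"
    note: str

@dataclass
class SmartProduct:
    product_id: str
    history: List[LifecycleEvent] = field(default_factory=list)

    def record(self, stage: str, note: str) -> None:
        self.history.append(LifecycleEvent(stage, note))

car = SmartProduct("VIN-0001")                         # hypothetical identifier
car.record("factory", "assembled at plant 7; battery pack rev B")
car.record("transport", "minor scratch on rear bumper during shipping")
car.record("use", "owner #1; serviced at 20,000 km")
car.record("recycling", "battery pack recyclable")
for event in car.history:
    print(event.stage, "-", event.note)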
 ||
    
- If some people live in the 21st century and can afford it, but don't use smartphones, they are not human!
Because humanity depends on Technology, because we are Quantum Computers.



Also, Humans have an Operating System.... Read more.




| Gravity - Information - ElectroMagnetic |

What is Gravity?

Description of Gravity;
|| A few centuries B.C. the Greeks described the first realistic model of the universe: the Earth at the center of it, and somewhere on the outside realm a sphere on which the stars were fixed. This notion explained why the stars appear in the same places each year. The sun and the moon were also moving in circles around the earth. However, there were objects in the sky that seemed to wander around without any predictable type of motion; they named these objects planets, from the Greek word for wanderer. Aristotle and Plato had the planets perfectly circling the earth, and all objects falling onto the earth. Although this picture nowadays seems ridiculous, one must understand the context of those times: people thought that nature must be absolutely symmetric, perfect*. The only types of motion that were perfect were the straight line and the circle. Also, earth was considered the heaviest of all four natural elements (earth, water, fire, air). Therefore the Earth was placed at the center of the universe so that all objects would fall towards it in straight lines. The only objects that couldn't fall in straight lines, the planets and the stars, had to move in circles around the earth.
Copernicus was born in 1473 and was greatly interested in sky observations, becoming one of the most well-known astronomers of his time. At the age of 41 he gave his friends an anonymous manuscript in which he claimed that if, strictly as a mathematical convenience, the sun were placed at the center of the universe instead of the earth, then Ptolemy's system of epicycles could be greatly simplified. Of course he was afraid of letting his idea out, because he would have faced extreme consequences.
Tycho Brahe did not embrace the Copernican view of the universe; he invented one of his own, placing the earth again at the center of the universe. However, the sun now circled the earth, and all the planets circled the sun.
Johannes Kepler: Kepler's theory, along with Brahe's data, made astronomy 100 times more accurate than ever before. All I want to point out is the fact that in order for a minor experimental fact to be fitted into theory, a huge reconsideration of our understanding of the world had to be introduced. That was the first time it ever happened, but it has occurred numerous times since then in the history of science.
Galileo Galilei introduced for the first time the very essence of physics: experiments should be done (as the ancient Greeks did), but they should be tested against theory using mathematical language. He built inclined planes with different inclinations and let balls roll down them, measuring the time it took them to go down the slope. He quickly realized that no matter what the slope was, gentler or steeper, given twice the time a ball would travel four times the distance. At that point he made his first huge leap of imagination: that this must be true in the limiting case, even if the plane was completely vertical, so it must be true for free-falling bodies too. He managed to work out the mathematics of his experiments and figured out that it meant uniform acceleration. Then, as a second giant leap of imagination, he thought of bodies falling in a vacuum, an idea unthinkable at that point, for nothing was supposed to exist by itself. He broke free of the bonds of Aristotelian thought and stated that the only reason bodies fall at different speeds is the existence of air; if one were to do the experiments in a vacuum, all bodies would fall in the same way.
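Written out as the standard kinematics it implies (not Galileo's own notation), "twice the time, four times the distance" is just uniform acceleration:

d(t) = \tfrac{1}{2} a t^{2}
\quad\Longrightarrow\quad
\frac{d(2t)}{d(t)} = \frac{\tfrac{1}{2}a(2t)^{2}}{\tfrac{1}{2}a t^{2}} = 4.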
Isaac Newton found the laws that explained the previous discoveries of Kepler and Galileo (the laws had actually been suspected, but he was the one who proved this was indeed the case). His inverse-square law explained the ellipses that the planets trace, as Kepler had realized, and his F = ma law (where for F he put the newly found gravity) explained the motion of objects on the earth that Galileo had discovered. Second, he generalized his theory: he realized that the same laws that apply here on earth must also apply to the movement of the celestial bodies, and this is how he managed to explain the tides, as an earthly effect of celestial bodies. In that way he made the first great unification, the first of many destined to come: laws that could explain both earthly and celestial phenomena.
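The two laws mentioned here, in their usual modern notation (G is the gravitational constant):

F = G\,\frac{m_{1}m_{2}}{r^{2}}, \qquad F = m\,a.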
It took Albert Einstein up to 10 years to find an answer. He concluded that mass and energy must affect the gravitational field. Secondly, he stated that the gravitational field is not actually a force, as Newton had described it, but a curvature of space. To put it in simple words, bodies are affected by gravity not because of a force directly exerted on them, but because space is curved and therefore they have to follow space's grid. The presence of mass or energy does not affect the bodies directly; it affects space first, and then the bodies move in this curved space. Maybe it is difficult to understand, but as an analogy, imagine being in the sea (or a pool) and using your finger to create ripples on the water. The presence of your finger in the water creates these waves and alters the geometry of the water; it is not flat anymore. If you look through the rippled water, you will see the bottom distorted.
This idea, that space can be bent by mass, has breathtaking ramifications. Since mass curves space and the objects then just move in this space, there is no reason why, for example, light couldn't follow space's curvature. Indeed, one of Einstein's first ideas was that light should be able to bend too when massive objects exist close to it.
After several experiments, experimental teams led by Sir Arthur Eddington set off to measure the deflection of light during the solar eclipse of 1919. By that time Einstein had figured out the correct equations of his general theory of relativity and found the exact amount of bending. On the crucial day of the solar eclipse the two teams collected their data and then compared them to Einstein's theoretical predictions; the match was superb for the accuracy they had at the time.
Einstein's theory of gravity has never been disproved up to now (2004). Soon after its completion, the theory of quantum mechanics was developed: a description of the world at very small scales. However, general relativity seems to be incompatible with quantum mechanics and breaks down (theoretically). In most cases gravity is so weak that at such small scales it is ignored. However, in the interior of a black hole the huge amount of mass is not negligible. The same is true at the early stages of the universe: ultra-condensed matter, lots of mass squeezed into quantum distances. In these cases a quantum treatment of gravity will be needed, although there is no way right now to test how exactly general relativity must be modified.
Nowadays some physicists say that the new role quantum information plays in gravity sets the scene for a dramatic unification of ideas in physics. Some time ago Erik Verlinde at the University of Amsterdam put forward one such idea, which has taken the world of physics by storm. Verlinde suggested that gravity is merely a manifestation of entropy in the Universe. His idea is based on the second law of thermodynamics, that entropy always increases over time. It suggests that differences in entropy between parts of the Universe generate a force that redistributes matter in a way that maximizes entropy. This is the force we call gravity.
But perhaps the most powerful idea to emerge from Verlinde's approach is that gravity is essentially a phenomenon of information.
Today this idea gets a useful boost from Jae-Weon Lee at Jungwon University in South Korea and a couple of other scientists. They use the idea of quantum information to derive a theory of gravity, and they do it taking a slightly different tack to Verlinde.
At the heart of their idea is the tricky question of what happens to information when it enters a black hole. Physicists have puzzled over this for decades with little consensus. But one thing they agree on is Landauer's principle: erasing a bit of quantum information always increases the entropy of the Universe by a certain small amount and requires a specific amount of energy. Jae-Weon Lee and colleagues assume that this erasure process must occur at the black hole horizon. And if so, spacetime must organize itself in a way that maximizes entropy at these horizons. In other words, it generates a gravity-like force.
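Landauer's bound mentioned above is easy to put a number on: the minimum energy to erase one bit is k_B T ln 2, a few zeptojoules at room temperature (the temperature chosen below is just an illustrative assumption):

import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed room temperature, K
E_min = k_B * T * math.log(2)
print(f"{E_min:.2e} J per erased bit")     # ~ 2.87e-21 J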
It also relates gravity to quantum information for the first time. Over recent years many results in quantum mechanics have pointed to the increasingly important role that information appears to play in the Universe.
Some physicists are convinced that the properties of information do not come from the behavior of information carriers such as photons and electrons but the other way round. They think that information itself is the ghostly bedrock on which our universe is built.
Gravity has always been a fly in this ointment. But the growing realization that information plays a fundamental role here too, could open the way to the kind of unification between the quantum mechanics and relativity that physicists have dreamed of.
 ||

*They preferred a symmetrical universe many years ago. Einstein too preferred a symmetrical universe, about 100 years ago, and of course this was one of Einstein's biggest mistakes, the one that left him unable to build a unification theory. His love of symmetry closed his eyes to reality, so it forced him to add yet another fancy parameter to make his universe symmetrical: time!
He simply put the parameter of time into his equations to make a symmetrical explanation of the universe. In the other camp, a bigoted and radical viewpoint about the quantum universe kept them from understanding the universe exactly. They just didn't like Einstein's symmetrical diamond and preferred to build an asymmetrical wooden universe! Yet if you want to combine Quantum Mechanics with General Relativity, you only need to omit one parameter from GR and one opinion from QM.
Omit time from GR, and omit materialism from QM (the fundamental element of the universe is neither a tiny ball called a particle nor a fancy string in 26 dimensions!!!).
In this way you have one common, observable and real force: the ElectroMagnetic force,
which we observe with different effects at different scales:
quantum effects at small scales in quantum mechanics, the normal effects of electricity and so on at classical scales, and one important effect at large scales that is called gravity.
So there is no independent force named gravity; it is just EM at large scales, without time and without the graviton.

-          To be continued…




| Axiom - Scientific Thought - Physics |

Is there any Axiom?!

Description of Axiom
|| In a nutshell, the logic-deductive method is a system of inference where conclusions (new knowledge) follow from premises (old knowledge) through the application of sound arguments (syllogisms, rules of inference). Tautologies excluded, nothing can be deduced if nothing is assumed. Axioms and postulates are the basic assumptions underlying a given body of deductive knowledge. They are accepted without demonstration. All other assertions (theorems, if we are talking about mathematics) must be proven with the aid of the basic assumptions.
The logic-deductive method was developed by the ancient Greeks, and has become the core principle of modern mathematics. However, the interpretation of mathematical knowledge has changed from ancient times to the modern, and consequently the terms axiom and postulate hold a slightly different meaning for the present day mathematician, than they did for Aristotle and Euclid.
The ancient Greeks considered geometry as just one of several sciences, and held the theorems of geometry on a par with scientific facts. As such, they developed and used the logico-deductive method as a means of avoiding error, and for structuring and communicating knowledge. Aristotle's Posterior Analytics is a definitive exposition of the classical view.
At the foundation of the various sciences lay certain basic hypotheses that had to be accepted without proof. Such a hypothesis was termed a postulate. The postulates of each science were different. Their validity had to be established by means of real-world experience. Indeed, Aristotle warns that the content of a science cannot be successfully communicated, if the learner is in doubt about the truth of the postulates.
A great lesson learned by mathematics in the last 150 years is that it is useful to strip the meaning away from the mathematical assertions (axioms, postulates, propositions, theorems) and definitions. This abstraction, one might even say formalization, makes mathematical knowledge more general, capable of multiple different meanings, and therefore useful in multiple contexts.

In structuralist mathematics we go even further, and develop theories and axioms (like field theory, group theory, topology, vector spaces) without any particular application in mind. The distinction between an axiom and a postulate disappears. The postulates of Euclid are profitably motivated by saying that they lead to a great wealth of geometric facts. The truth of these complicated facts rests on the acceptance of the basic hypotheses. We get theories that have meaning in wider contexts, hyperbolic geometry for example. We must simply be prepared to use labels like line and parallel with greater flexibility. The development of hyperbolic geometry taught mathematicians that postulates should be regarded as purely formal statements, and not as facts based on experience.
When mathematicians employ the axioms of a field, the intentions are even more abstract. The propositions of field theory do not concern any one particular application; the mathematician now works in complete abstraction. There are many examples of fields and field theory gives correct knowledge in all contexts. It is not correct to say that the axioms of field theory are propositions that are regarded as true without proof. Rather, the Field Axioms are a set of constraints. If any given system of addition and multiplication tolerates these constraints, then one is in a position to instantly know a great deal of extra information about this system. This way, there is a lot of gain for the mathematician.
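A small illustration of "axioms as constraints", in code: check a few of the field axioms for arithmetic modulo n. They hold for n = 5 (a prime, so the integers mod 5 form a field), but multiplicative inverses fail for n = 6. This is only a finite spot-check of some of the axioms, not a proof.

def is_field_like(n):
    elems = range(n)
    add = lambda a, b: (a + b) % n
    mul = lambda a, b: (a * b) % n
    commutative = all(add(a, b) == add(b, a) and mul(a, b) == mul(b, a)
                      for a in elems for b in elems)
    distributive = all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
                       for a in elems for b in elems for c in elems)
    inverses = all(any(mul(a, b) == 1 for b in elems)
                   for a in elems if a != 0)
    return commutative and distributive and inverses

print(is_field_like(5))   # True  -- the constraints are satisfied
print(is_field_like(6))   # False -- 2 has no multiplicative inverse mod 6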

Modern mathematics formalizes its foundations to such an extent that mathematical theories can be regarded as mathematical objects, and logic itself can be regarded as a branch of mathematics. Frege, Russell, Poincaré, Hilbert, and Gödel are some of the key figures in this development.
In the modern understanding, a set of axioms is any collection of formally stated assertions from which other formally stated assertions follow by the application of certain well-defined rules. In this view, logic becomes just another formal system. A set of axioms should be consistent; it should be impossible to derive a contradiction from the axioms. A set of axioms should also be non-redundant; an assertion that can be deduced from other axioms need not be regarded as an axiom.
It was the early hope of modern logicians that various branches of mathematics, perhaps all of mathematics, could be derived from a consistent collection of basic axioms. An early success of the formalist program was Hilbert's formalization of Euclidean geometry, and the related demonstration of the consistency of those axioms. In a wider context, there was an attempt to base all of mathematics on Cantor's set theory. Here the emergence of Russell's paradox, and similar antinomies of naive set theory raised the possibility that any such system could turn out to be inconsistent.

The formalist project suffered a decisive setback, when in 1931 Gödel showed that it is possible, for any sufficiently large set of axioms (Peano's axioms, for example) to construct a statement whose truth is independent of that set of axioms. As a corollary, Gödel proved that the consistency of a theory like Peano arithmetic is an un-provable assertion within the scope of that theory.
It is reasonable to believe in the consistency of Peano arithmetic because it is satisfied by the system of natural numbers, an infinite but intuitively accessible formal system. However, at this date we have no way of demonstrating the consistency of modern set theory (the Zermelo-Fraenkel axioms). The axiom of choice, a key hypothesis of this theory, remains a very controversial assumption. Furthermore, using techniques of forcing (Cohen) one can show that the continuum hypothesis (Cantor) is independent of the Zermelo-Fraenkel axioms. Thus, even this very general set of axioms cannot be regarded as the definitive foundation for mathematics.
We know that the efforts of mathematicians to recover the apodictic certainty of the axioms as ‘self-evident truths’ during the ‘crisis in the foundations’ failed in its task. The last bastion of rationalism finally fell. All this took place during and after the rise of positivism in the nineteenth century and, in one of history’s ironies, while the ground was giving way beneath the rationalist position, positivism was declaring that mathematics was to be excluded from the sciences because it was not empirical.
Simple axioms are combined in our daily life via the laws of logic without us realizing it, enabling us to create more complex theorems.
||

Is there any Axiom?
In science, of course, there is no axiom apart from some simple definitions; there is nothing that is accepted without definition.
We must prove all phenomena (natural phenomena or beliefs).
But consider that our science is based on some unexamined axioms: we don't know why we accept them, or how and why we use them. That is why we have so many paradoxes and unsolved problems in science.
To read more see;

-       To be continued…