
| Innovation - OpenSource - Technology |

 Open Source Technology

Description of Open Source:
|| Nowadays the patent wars have heated up and tech companies clash over alleged patent infringement.
Consider the disputes between Apple, Samsung, HTC, Motorola, Google, Nokia, Microsoft and others.
Such behavior works against technological development; monopoly slows scientific and technical progress, yet the only thing valued here is more profit.
Strict rules of this kind create a monopoly for the producer and reduce competition between companies.
Loosening copyright and patent restrictions, by contrast, creates a competitive environment for all producers and developers and increases the speed of scientific advancement. It is the best way to accelerate technological progress.
That is open source…
In production and development, open source is a pragmatic methodology that promotes free redistribution of, and access to, the results of production and research, as well as information resources in many fields.
Nowadays the phrase “open source” is used almost exclusively in connection with computers and software, but the public sharing of information is not limited to any time or subject: the concept of freely sharing technological information existed long before computers. Cooking recipes, for example, have been shared since the beginning of human culture!
The leading family of open-source software products is the Unix-like operating systems.
Many successful projects have been built and developed quickly on open source code: Unix, Linux, Android, Ubuntu, OpenIndiana, FreeBSD, Chromium OS, Mac OS X, Firefox and others belong to the same family and are derived from one another.
One of the greatest achievements of open standards is the Internet. Researchers with access to the Advanced Research Projects Agency Network (ARPANET) used a process called Request for Comments to develop telecommunication network protocols. This collaborative process of the 1960s led to the birth of the Internet in 1969.
Open source gained hold with the rise of the Internet and the attendant need for massive retooling of computing source code. Opening the source code enabled a self-enhancing diversity of production models, communication paths, and interactive communities.
Early instances of the free sharing of source code include IBM's source releases of its operating systems and other programs in the 1950s and 1960s, and the SHARE user group that formed to facilitate the exchange of software.
Most economists agree that open-source software has the character of an information good. In general, this means that the original work involves a great deal of time, money, and effort. However, the cost of reproducing the work is very low, so additional users may be added at zero or near-zero cost (this is referred to as the marginal cost of a product). Copyright creates a monopoly, so the price charged to consumers can be significantly higher than the marginal cost of production. This allows the producer to recoup the cost of making the original work without needing to find a single customer who can bear the entire cost. Conventional copyright thus creates access costs for consumers who value the work more than the marginal cost but less than the initial production cost.
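A toy numerical sketch of that argument (all figures below are invented purely for illustration): consumers who value the work above its marginal cost but below the monopoly price are priced out, even though serving them would cost almost nothing.

# Toy illustration of the access-cost argument (all numbers are made up).
fixed_cost = 100_000          # cost of creating the original work
marginal_cost = 1             # cost of producing one extra copy
monopoly_price = 50           # price set to recoup the fixed cost

# Hypothetical consumers, each with a maximum willingness to pay:
willingness_to_pay = [5, 10, 25, 40, 60, 80, 120]

served_at_monopoly = [w for w in willingness_to_pay if w >= monopoly_price]
served_at_marginal = [w for w in willingness_to_pay if w >= marginal_cost]

print("served at monopoly price:", len(served_at_monopoly))   # 3
print("served at marginal cost :", len(served_at_marginal))   # 7
# The four consumers in between value the work above its marginal cost but
# below the monopoly price: these are the access costs created by copyright.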
Being organized effectively as a consumers' cooperative, the idea of open source is to reduce the access costs of the consumer and the creators of derivative works by reducing the restrictions of copyright. Basic economic theory predicts that lower costs would lead to higher consumption and also more frequent creation of derivative works.
However, others argue that because consumers do not pay for the copies, creators are unable to recoup the initial cost of production, and thus have no economic incentive to create in the first place. By this argument, consumers would lose out because some of the goods they would otherwise purchase would not be available at all. In practice, content producers can choose whether to adopt a proprietary license and charge for copies, or an open license. Some goods that require large amounts of professional research and development, such as pharmaceuticals (an industry which depends largely on patents, not copyright, for intellectual property protection), are almost exclusively proprietary.
A report by the Standish Group states that adoption of open-source software models has resulted in savings of about $60 billion per year to consumers.
||


We must try to improve our technology, which means that, beyond money, companies should work together to invent new technology and develop more devices. But companies think only about money and exclusive technology.
Yet if they worked together there would be money for all of them. Instead, they are fighting!
- How do you invent?
Perhaps we can look at it in two ways:
Humans need some device, so they create it.
Humans observe a new, creative device and then invent a new device with better performance.
So it is not scientific thinking to ban each other over new innovations,
especially when the new devices differ by only a single parameter.





| Quantum Computer - Fuzzy Logic - Information |

Zero-One is not enough

Description of Quantum Computer;
|| The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of a tape of unlimited length divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write head reads these symbols and blanks, which gives the machine its instructions for performing a certain program. Does this sound familiar? Well, in a quantum Turing machine the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be 0, 1, or a superposition of 0 and 1; in other words, the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.
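To make the classical half of that picture concrete, here is a minimal sketch of a classical Turing machine in Python; the transition table and the bit-flipping example program are invented for illustration, but the read-write-move loop is the standard one.

# Minimal classical Turing machine (the example program is hypothetical):
# the transition table maps (state, symbol) -> (new state, symbol to write, move).
def run_turing_machine(tape, transitions, state="start", max_steps=100):
    cells = dict(enumerate(tape))              # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, "_")          # "_" stands for a blank square
        if (state, symbol) not in transitions:
            break                              # halt when no rule applies
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example program: flip every bit, halting at the first blank square.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}
print(run_turing_machine("1011", flip))        # prints "0100"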
Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons, together with their respective control devices, working in concert to act as computer memory and a processor. Because a quantum computer can hold these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers. This superposition of qubits is what gives quantum computers their inherent parallelism. This parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
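A quick way to see why qubit counts matter: simulating n qubits classically takes a state vector of 2^n complex amplitudes. Here is a small sketch (assuming the NumPy library is available) of one qubit in an equal superposition, and of how the amplitude count grows.

import numpy as np

# One qubit in an equal superposition of |0> and |1>:
# measuring it gives 0 or 1, each with probability |amplitude|^2 = 0.5.
qubit = np.array([1.0, 1.0]) / np.sqrt(2)
print(np.abs(qubit) ** 2)                      # [0.5 0.5]

# Describing n qubits classically needs 2**n complex amplitudes,
# which is why even a few dozen qubits overwhelm ordinary memory.
for n in (10, 30, 50):
    print(n, "qubits ->", 2 ** n, "amplitudes")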
Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system's integrity. Entanglement provides a potential answer. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, and the second atom can take on the properties of the first atom. So if left alone, an atom will spin in all directions. The instant it is disturbed it chooses one spin, or one value; and at the same time, the second entangled atom will choose an opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
Computer scientists control the microscopic particles that act as qubits in quantum computers by using several kinds of control devices: ion traps use optical or magnetic fields (or a combination of both) to trap ions; optical traps use light waves to trap and control particles; quantum dots, made of semiconductor material, are used to contain and manipulate electrons; semiconductor impurities confine electrons using "unwanted" atoms found in semiconductor material; and superconducting circuits allow electrons to flow with almost no resistance at very low temperatures.
Quantum computers could one day replace silicon chips, just like the transistor once replaced the vacuum tube. But for now, the technology required to develop such a quantum computer is beyond our reach. Most research in quantum computing is still very theoretical.
The most advanced quantum computers have not gone beyond manipulating 16 qubits, meaning that they are a far cry from practical application. However, the potential remains that quantum computers could one day perform, quickly and easily, calculations that are incredibly time-consuming on conventional computers.
To date, the two most promising uses for such a device are quantum search and quantum factoring. To understand the power of a quantum search, consider classically searching a phonebook for the name that matches a particular phone number. If the phonebook has 10,000 entries, on average you'll need to look through about half of them (5,000 entries) before you get lucky. A quantum search algorithm only needs to guess about 100 times. With 5,000 guesses a quantum computer could search through a phonebook with 25 million names.
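The scaling behind those numbers, roughly N/2 classical lookups versus about the square root of N quantum (Grover-type) queries, can be checked in a couple of lines; constant factors are ignored here for simplicity.

import math

# Rough comparison: average classical lookups (~N/2) versus Grover-style
# quantum queries (~sqrt(N)). Constant factors are ignored for simplicity.
for entries in (10_000, 25_000_000):
    classical = entries // 2
    quantum = round(math.sqrt(entries))
    print(f"{entries:>11,} entries: ~{classical:>11,} classical vs ~{quantum:>6,} quantum")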
Although quantum search is impressive, quantum factoring algorithms pose a legitimate, considerable threat to security. This is because the most common form of Internet security, public key cryptography, relies on certain math problems (like factoring numbers that are hundreds of digits long) being effectively impossible to solve. Quantum algorithms can perform this task exponentially faster than the best known classical strategies, rendering some forms of modern cryptography powerless to stop a quantum code-breaker.
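To see why factoring underpins that security, here is a naive classical trial-division factorizer; its running time grows roughly with the square root of the number being factored, which becomes hopeless for the hundreds-of-digits moduli used in public-key cryptography, whereas a quantum factoring algorithm runs in polynomial time. The sketch and its test numbers are purely illustrative.

def trial_division(n):
    # Naive classical factoring: try every candidate divisor up to sqrt(n).
    # Time grows roughly as sqrt(n) -- utterly impractical for the
    # hundreds-of-digit moduli used in public-key cryptography.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(21))        # [3, 7]
print(trial_division(104729))    # [104729] -- a prime, so nothing splits off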
Bits, either classical or quantum, are the simplest possible units of information. They are oracle-like objects that, when asked a question (i.e. when measured), can respond in one of only two ways. Measuring a bit, either classical or quantum, will result in one of two possible outcomes. At first glance, this makes it sound like there is no difference between bits and qubits. In fact, the difference is not in the possible answers, but in the possible questions. For normal bits, only a single measurement is permitted, meaning that only a single question can be asked: Is this bit a zero or a one? In contrast, a qubit is a system which can be asked many, many different questions, but to each question, only one of two answers can be given.
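In the standard notation (quoted here for reference, not specific to this post), a qubit's state is a superposition

\[ |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \]

and a measurement in the \(\{|0\rangle, |1\rangle\}\) basis gives 0 with probability \(|\alpha|^2\) and 1 with probability \(|\beta|^2\); choosing a different measurement axis asks a different question of the same state.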
That is how a one-qubit quantum computer is supposed to work, but what happens when things go wrong? For a classical bit, the only thing that can go wrong is for the bit to unexpectedly flip from zero to one or from one to zero. The same type of thing could happen to qubits, in the form of unexpected or unwanted rotations. But there is another type of process, one that researchers in quantum computing are constantly fighting to eliminate: decoherence. Decoherence happens when something outside the quantum computer performs a measurement on a qubit, the result of which we never learn.
Pairs of qubits are much, much more than the sum of their parts.
Classical bits only become marginally more interesting when paired—it literally only makes the difference between counting to two and counting to four. Pairs of quantum bits, on the other hand, can be used to create entanglement. This phenomenon became one of the most controversial arguments in 20th century physics. It revolved around whether it could exist at all.
Not only can a single qubit take on a whole sphere full of values, it can only be measured along a single axis at a time. Not only that, but measuring changes its state from whatever it was before the measurement to whatever state the measurement produced. That's a problem. In fact, it can be proven that it is not possible, even in principle, to copy an unknown qubit's state.
Consider the "singlet state," an example of an entangled two-qubit state. A singlet state has two defining characteristics:
Any single-qubit measurement performed on one half of the
singlet state will give a totally random result.
Any time the same single-qubit measurement is performed on
Both qubits in a singlet state, the two measurements will give opposite results.
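For reference, the singlet state itself is usually written

\[ |\psi^-\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle|1\rangle - |1\rangle|0\rangle\bigr), \]

which is why each half alone looks completely random while the two outcomes of the same measurement on both halves are always opposite.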
To picture these characteristics, imagine someone showed you a pair of coins, claiming that when both were flipped at the same time, one would always come up heads and one would always come up tails, but which was which would be totally random. What if they claimed that this trick would work instantly, even if the coins were on opposite sides of the Universe? Yet time and time again, experiment after experiment, the results show that something about local realism must be wrong. Either the events simply cannot be predicted, even in principle, or there is something fundamentally nonlocal about entanglement: an ever-present bond between entangled particles which persists across any distance.
To give you an idea, consider that single-qubit states can be represented by a point inside a sphere in 3-dimensional space. Two-qubit states, in comparison, need to be represented as a point in 15-dimensional space.
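Those dimension counts come from the density-matrix description of quantum states: an n-qubit mixed state needs \(4^n - 1\) real parameters, so

\[ n = 1 \Rightarrow 3 \ \text{(the Bloch ball)}, \qquad n = 2 \Rightarrow 15, \qquad n = 100 \Rightarrow 4^{100} - 1 \approx 1.6\times 10^{60}. \]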
It's no wonder, therefore, that quantum physicists talk about a 100-qubit quantum computer like it's the holy grail. It's simply much too complicated for us to simulate using even the largest conceivable classical computers.
If we want to measure the polarization of a photon, we put it through a polarizer. What that polarizer actually does is couple a polarization qubit to a spatial qubit, resulting in a superposition of two possible realities. That superposition is an entangled state. Using a different polarizer, it would be straightforward to unentangle it without ever making a measurement, effectively erasing the fact that the first measurement ever happened at all. Instead, a photodetector is placed in the path of the transmitted half of the entangled state. If there is a photon there, it will excite an electron. That excited electron will cause an electron avalanche, which will cause a current to surge in a wire, which will be sent to a classical computer, which will change the data in that computer's RAM, which will finally be viewed by you.
That chain of events means every part of the experiment, even the experimenter, is part of a single quantum superposition. Naturally, you might imagine that at some point something breaks the superposition, sending the state irreversibly down one path or the other. The problem is that every time we've followed the chain of larger and larger entangled states, they always appear to be in a superposition, in this pseudo-magical state where any set of axes is equally valid and every operation is reversible.
Maybe, at some point, it all gets too big, and new physics happens. In other words, something beyond quantum mechanics stops the chain of larger and larger entangled states, and this new physics gives rise to our largely classical world. Many physicists think that this happens; many others think it doesn't, and instead imagine the universe as an unfathomably complex, inescapably beautiful symphony of possibilities, each superposed reality endlessly pulsing in time to its own energy.
-

In 20 years we may combine all our communication systems (cell phones, computers, TV, radio and the Internet) into chips on a thin headband that transmits information between the Internet and our brain, and also to other headbands. That connection could give us network-enabled telepathy: we would communicate directly with another person's headband on the other side of the world, using just our thoughts.
Recognizing thoughts instead of spoken words may seem difficult, but with training, thought-talking could become easy and routine.
Your computer-driven, auto-drive electric car rolls its top down on this warm day. You manually drive to the electronic roadway on-ramp and relinquish the wheel. Your headband selects a video to enjoy on the way to the airport, where your smart car drops you off at the terminal and then auto-parks itself. An intelligent cam scans your mind and quickly approves you; no waiting for ticket-check or security. While boarding the plane, you see a familiar face. Your headband immediately flashes his identity data and displays it before your eyes. Our headbands enable us to speak or think any question and get an immediate answer.
This would help a lot of people: the need to learn languages, for example, would disappear, and the headbands would be available to everyone.
We can say that quantum computers will greatly improve relationships: no more forgetting names and details, and the increased intimacy generated by communicating by thought could bring people around the world closer together.
With our headbands we will speak or think any question and get an immediate answer.
We still have significant research and development ahead of us, as we are currently confronted with an unacceptably large amount of data to be processed simultaneously and too little of that data present in the processor at the moment of calculation.
Even then, the answer obtained should be the one most certain for the moment in time at which the question was asked.
||
        
(We will complete this by introducing operating systems on quantum computers,
the software capabilities of quantum computers, etc.)








| OS - Computer - Information |

What is an Operating System?

Description of Operating System;
|| A mobile operating system (mobile OS) is the system that controls a smartphone, tablet, PDA, or other mobile device. Modern mobile operating systems combine the features of a personal computer operating system with a touchscreen, cellular connectivity, Bluetooth, WiFi, GPS navigation, camera, video camera, speech recognition, voice recorder, music player, near-field communication, personal-digital-assistant (PDA) functions, and other features.
A smart device is an electronic device that is cordless (except while being charged), mobile (easily transportable), always connected (via WiFi, 3G, 4G, etc.) and capable of voice and video communication, internet browsing and geo-location (for search purposes), and that can operate to some extent autonomously. It is widely believed that these types of devices will outnumber all other forms of smart computing and communication in a very short time.
Smart Phones use many different operating systems. 
The evolution of smart devices has accelerated exponentially since the beginning of this century, and this trend continues to grow in significance. Mobility has already become an essential part of our daily lives, and the future will certainly bring natural interfaces between humans and the smart devices that will surround them in every environment. Behind this phenomenon is the general desire not only to have easy access to information, but to share that information, to pay for purchases, to access entertainment, to seek out products and to buy them, and more, all just by pushing the buttons on a single handheld device. There is a strong relationship between technology and equipment; they are developing together, side by side, and getting smarter and smarter. These increasingly smart devices are our interface with a world of technology content, applications and services; they let us interact with technology and reap its benefits. Now we are looking forward to the general availability of smart machine-to-machine communication between smart devices.
Technology has squeezed the functions of four different devices and merged them into one. This process of shrinkage is putting a world of functions, and the world itself, into the pockets of more and more people each day. The escalating demand for shrinking devices that combine portability and functionality is pushing the growth of advanced semiconductor manufacturing. Equipment manufacturers are doing extensive research in the field of surface-mount device (SMD) manufacturing technologies. SMDs facilitate quick and inexpensive manufacturing of electronic equipment; they are a significant source of competitive advantage in sectors such as consumer electronics, automotive, education, healthcare and other industries as well.
Demand is increasing for a wide variety of micro-devices, so vendors are hard-pressed to furnish everything required to fill global distribution and supply chains. According to the European Platform for Micro & Nanomanufacturing, shrinking equipment is playing a pivotal role in social and economic development; it broadens access to healthcare, education and other essential social services by providing the platforms that enable organizations to meet community needs.
As ICT technology advances, we see growing connectivity among smart devices: computers, mobile phones and even televisions. With the widespread penetration of mobile phones and other handheld devices that connect to the Internet, nearly 4 billion people worldwide have some level of access to computing. Coupled with powerful, feature-rich software applications, these smart devices are helping to bridge the rural-urban digital divide. The convergence of device connectivity and software innovation is enabling a greater number of people and organizations around the world to access information and to communicate and collaborate in more powerful ways.
Large emerging markets such as China and India are exploring the potential of smart devices to improve healthcare services. These countries are generating tremendous demand for affordable and reliable smart medical devices to improve the treatment and care of millions of patients. Today, medical device designers are devising new equipment to enhance their diagnostic, monitoring, and treatment capabilities.
They are putting the capabilities of clinical devices into portable units the size of a cell phone. Equipment shrinkage in the healthcare sector now lets healthcare workers carry tools that once required huge machine installations in hospitals, and provide sophisticated services in remote areas.
The education sector is also readily adopting and utilizing small, handy teaching aids and gadgets. The production of low-cost, small laptops has greatly changed the paradigm of ICT-enabled education, especially in the developing world.
The consumer, though, is more concerned with the issues of reliability, power consumption, security, privacy and safety associated with smart devices. The shrinkage of equipment has changed the world we live in today, the way we communicate, network and interact with others, and will continue to do so for the foreseeable future.
Many studies have been done to clarify and understand the theoretical relationships between system design for mobile computing, human behavior, social attributions, and interaction outcomes. As a conclusion, we can doubt that our inevitable future is to become a machine-like collective society. How devices are used is not determined by their creators alone. Individuals influence how devices are used, and humans can be tenaciously social creatures. Given the importance of social relationships in our lives, we may adopt only those devices that support, rather than inhibit, such relationships.
With the substantial amount of skepticism related to technology, such findings seem to counterbalance the immediate threat that a thoroughly computerized future appears to hold. However, apart from personal prejudices, the wide range of social consequences that pervasive computing may have will certainly need to be addressed in future systems and debates.
-

Will there be 1,000 devices per person within a decade? Our surroundings at the office, at home and in public places will be almost polluted with embedded electronics. If they are smart enough, the devices will be able to find each other and share information and tasks within a local smart space. A smart space can operate on a stand-alone basis, sometimes even without contacting Internet services. In order to function, smart spaces require a common communication language and common semantics between the devices.
At the end point of this vision, we see a ubiquitous world in which it may be difficult to tell the difference between real and virtual objects. People will attend meetings at which participants are spread around the world, with some appearing as digital holographic avatars. The same meeting will be held simultaneously in a virtual world, e.g. in Second Life, so it will be people's own choice how they participate. There will be holographic objects in rooms showing news, information and infotainment.
Communication between people has broadened with the aid of the Internet and other digital communication methods. Social media have become part of people’s everyday lives. People’s voices can be heard more broadly and equally than ever before.
The number of users of social media services continues to grow. For example, Facebook has more than 500 million users, of which 150 million use the service with a smart device. The mobile dimension links users live to the services much more efficiently than over a computer.
Part of the teaching in schools and universities is carried out with video or web courses. Laptops and smart devices are used by students for personal purposes but are not yet integrated into the teacher's material or teaching; only a few are using them.
IT systems are not generally interoperable. For example, heavy integration is needed to pass patient information between healthcare units.
 We have more and more meetings with people around the globe over long distances and less time is wasted travelling. Video, web and phone meetings are a good and cost-effective alternative to travelling.
Young people do not remember a time without the Internet and smart devices. They are 'connected to the Internet' from birth. They increasingly meet their friends on the net rather than in real life.
Interactive media devices have tended to change the psychological and social aspects of human behavior, leading to more voluntary sharing of personal and private information.
There is also a tendency to spend an increasing amount of time on the Internet, time taken away from reading, playing, exercising, and so on.
These changes also affect service sectors such as the postal service and media outlets. There are fewer letters and newspapers to deliver to homes because of the Internet. Fewer paper bills are sent; instead, bills and shopping are paid for directly on the Internet, creating an ever-growing virtual world. People often read newspapers online nowadays. The big advantage of these services is not only the speed at which information is transferred but also their interactivity, which adds much value compared with the one-way stream of information the older services provided.
One of the disadvantages is the identification and security aspect, which is exposed to an almost organic flow of viruses, resulting in people having different user names and accounts for home, work and hobbies. No one can remember all the passwords and PINs anymore without yellow Post-it notes. Smart device and smart card technology provide one practical solution to this challenge.
We will carry a smart device with us at all times which gives us essential information and connectivity to the internet that we can no longer live without. The only thing people would take to a desert island would be their personal device.
In healthcare, it will become the norm to contact doctors and nurses over the Internet, for example, taking a Skype video call to a local hospital in the case of illness.
In the near future we can predict:
Schools will disappear, as teaching will be carried out wherever the students are. Schools will not be able to keep up with the hard-paced development of the digital world. Lectures will be consumed at home or on the move.
It will no longer be considered necessary, trendy or righteous to travel. Real-life telepresence will be used instead of travel. At first this will be based on holographic technologies, but later brain implants will give a very realistic feeling of presence anywhere in the world, making the needed information available to any local presence. In the opposite direction, much education and training can be provided throughout the world, e.g. to under-developed countries and hard-to-reach places: jungles, polar regions, remote locations, etc.
Embedded electronics will be the main source of new electronics in our lives.
Devices will have common ‘languages’ to exchange information and understand each other at a semantic level.
Products will be smart and able to store and present information throughout their lifecycles. For example, a car will carry information from the factory (who made it, which parts were integrated), from transport (any scratches or drops during shipping), from the shop (who test-drove it), from use (owners, services, faults) and, in the end, which parts can be recycled. Energy will continue to be a scarce resource, and its production, transfer, storage and use will be optimized with a smart grid and energy-harvesting technologies. Smart devices will provide us with digital senses and means to interact with virtual worlds. Ever-present and fluent connectivity to the Internet will become so self-evident that the providing technologies and wireless networks disappear from the users' awareness. Similarly, the cloud will be hidden from the users. Data, applications and services will simply be there, somewhere.
 ||
    
- If some people live in the 21st century and can afford it, yet don't use smartphones, they are not human!
Because humanity depends on technology, because we are quantum computers.



Humans also have an operating system.... Read more.




| Gravity - Information - ElectroMagnetic |

What is Gravity?

Description of Gravity;
|| A few centuries B.C., the Greeks described the first realistic model of the universe. The Earth was at its center, and a sphere on which the stars were fixed lay somewhere in the outer realm. This notion explained why the stars appear in the same places each year. The sun and the moon also moved in circles around the earth. However, there were objects in the sky that seemed to wander around without any predictable type of motion; they named these objects planets, from the Greek word for wanderer. Aristotle and Plato had the planets circling the earth in perfect circles, and all objects falling toward the earth. Although this picture seems ridiculous nowadays, one must understand the context of those times: people thought that nature must be absolutely symmetric, perfect*. The only types of motion considered perfect were the straight line and the circle. Also, earth was considered the heaviest of the four natural elements (earth, water, fire, air). Therefore the Earth was placed at the center of the universe so that all objects would fall towards it in straight lines. The only objects that couldn't fall in straight lines, the planets and the stars, had to move in circles around the earth.
Copernicus was born in 1473 and was greatly interested in observing the sky, becoming one of the best-known astronomers of his time. At the age of 41, he gave his friends an anonymous manuscript in which he claimed that if, strictly as a mathematical convenience, the sun were placed at the center of the universe instead of the earth, then Ptolemy's system of epicycles could be greatly simplified. Of course he was afraid of letting his idea out, because he would face extreme consequences.
Tycho Brahe did not embrace the Copernican view of the universe; he invented one of his own, placing the earth again at the center of the universe. However, the sun now circled the earth, and all the planets circled the sun.
Johannes Kepler's theory, along with Brahe's data, made astronomy about 100 times more accurate than ever before. The point to note is that in order for a minor experimental fact to be fitted into theory, a huge reconsideration of our understanding of the world had to be introduced. That was the first time it ever happened, but it has occurred numerous times since then in the history of science.
Galileo Galilei introduced for the first time the very essence of physics: experiments should be done (as the ancient Greeks did), but they should be tested against theory using mathematical language. He built inclined planes of different inclinations and let balls roll down them, measuring the time it took them to descend the slope. He quickly realized that no matter what the slope was, gentler or steeper, given twice the time a ball would travel four times the distance. At that point he made his first huge leap of imagination: this must be true in the limiting case, even if the plane were completely vertical, so it must be true for free-falling bodies too. He managed to work out the mathematics of his experiments and figured out that it meant uniform acceleration. Then, as a second giant leap of imagination, he thought of bodies falling in a vacuum, an idea unthinkable at that point, for nothing was supposed to exist by itself. He broke free of the bonds of Aristotelian thought and stated that the only reason bodies fall at different speeds is the existence of air; if one were to do the experiments in a vacuum, all bodies would fall in the same way.
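The "twice the time, four times the distance" observation is exactly what uniform acceleration predicts:

\[ d = \tfrac{1}{2} a t^2 \quad\Longrightarrow\quad d(2t) = \tfrac{1}{2} a (2t)^2 = 4\, d(t). \]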
Isaac Newton found the laws that explained the previous discoveries of Kepler and Galileo (the laws had actually been suspected, but he was the one who proved this was indeed the case). His inverse-square law explained the ellipses that the planets trace, as Kepler had realized, and his F = ma law (with the newly founded gravity supplying the force F) explained the motion of objects on the earth that Galileo had discovered. Second, he generalized his theory: he realized that the same laws that apply here on earth must also apply to the movement of the celestial bodies, and this is how he managed to explain the tides: as an earthly effect of celestial bodies. In that way he made the first great unification, the first of many destined to come: laws that could explain both earthly and celestial phenomena.
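For reference, the two laws being combined here are

\[ F = G\,\frac{m_1 m_2}{r^2} \qquad \text{and} \qquad F = m a , \]

so a body falling near the Earth accelerates at \(a = G M_{\oplus}/r^2\), independent of its own mass, which is Galileo's result; the same inverse-square force also reproduces Kepler's elliptical orbits.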
It took Albert Einstein some ten years to find an answer. He concluded that mass and energy must affect the gravitational field. Second, he stated that the gravitational field is not actually a force, as Newton had described it, but a curvature of space. To put it simply, bodies are affected by gravity not because of a force directly exerted on them but because space is curved and they have to follow space's grid. The presence of mass or energy does not affect the bodies directly; it affects space first, and then the bodies move in this curved space. It may be difficult to picture, but as an analogy, imagine being in the sea (or a pool) and using your finger to create ripples on the water. The presence of your finger in the water creates these waves and alters the geometry of the water; it is not flat anymore. If you look through the rippled water you will see the bottom distorted.
This idea, that space can be bent by mass, has breathtaking ramifications. Since mass curves space and objects simply move in this space, there is no reason why, for example, light couldn't follow space's curvature. Indeed, one of Einstein's first ideas was that light should bend too when massive objects lie close to its path.
After several earlier attempts, experimental teams led by Sir Arthur Eddington set off to measure the deflection of light during the solar eclipse of 1919. By that time Einstein had worked out the correct equations of his general theory of relativity and found the exact amount of bending. On the crucial day of the solar eclipse the two teams collected their data and compared them with Einstein's theoretical predictions; the match was superb for the accuracy available at the time.
Einstein's theory of gravity has never been disproved up to now (2004). Soon after its completion, the theory of quantum mechanics was developed: a description of the world at very small scales. However, general relativity seems to be incompatible with quantum mechanics and breaks down (theoretically). In most cases gravity is so weak at such small scales that it can be ignored. However, in the interior of a black hole the huge amount of mass is not negligible. The same is true at the early stages of the universe: ultra-condensed matter, enormous mass compressed into quantum distances. In these cases a quantum treatment of gravity will be needed, although there is currently no way to test exactly how general relativity must be modified.
Nowadays some physicists argue that the new role quantum information plays in gravity sets the scene for a dramatic unification of ideas in physics. Some time ago Erik Verlinde at the University of Amsterdam put forward one such idea, which has taken the world of physics by storm. Verlinde suggested that gravity is merely a manifestation of entropy in the Universe. His idea is based on the second law of thermodynamics, that entropy always increases over time. It suggests that differences in entropy between parts of the Universe generate a force that redistributes matter in a way that maximizes entropy. This is the force we call gravity.
But perhaps the most powerful idea to emerge from Verlinde's approach is that gravity is essentially a phenomenon of information.
Today this idea gets a useful boost from Jae-Weon Lee at Jungwon University in South Korea and a couple of colleagues. They use the idea of quantum information to derive a theory of gravity, and they do it taking a slightly different tack to Verlinde.
At the heart of their idea is the tricky question of what happens to information when it enters a black hole. Physicists have puzzled over this for decades with little consensus. But one thing they agree on is Landauer's principle: erasing a bit of quantum information always increases the entropy of the Universe by a certain small amount and requires a specific amount of energy. Jae-Weon and colleagues assume that this erasure process must occur at the black hole horizon. And if so, spacetime must organize itself in a way that maximizes entropy at these horizons. In other words, it generates a gravity-like force.
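Landauer's principle can be stated compactly (a standard result, quoted here for reference): erasing one bit at temperature \(T\) increases entropy by at least \(k_B \ln 2\) and costs an energy of at least

\[ E \ge k_B T \ln 2 . \]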
It also relates gravity to quantum information for the first time. Over recent years many results in quantum mechanics have pointed to the increasingly important role that information appears to play in the Universe.
Some physicists are convinced that the properties of information do not come from the behavior of information carriers such as photons and electrons but the other way round. They think that information itself is the ghostly bedrock on which our universe is built.
Gravity has always been a fly in this ointment. But the growing realization that information plays a fundamental role here too could open the way to the kind of unification between quantum mechanics and relativity that physicists have dreamed of.
 ||

*They preferred a symmetrical universe many years ago. But Einstein also preferred a symmetrical universe about 100 years ago, and of course that was one of Einstein's biggest mistakes, the one that left him unable to build a unification theory. His love of symmetry closed his eyes to reality and forced him to invent yet another fancy parameter to make his universe symmetrical: time!
He simply added the parameter of time to his equations to make a symmetrical explanation of the universe. In the other camp, a bigoted and radical viewpoint on the quantum universe kept them from understanding the universe exactly either: they just didn't like Einstein's symmetrical diamond and preferred to build an asymmetrical wooden universe! Yet if you want to combine Quantum Mechanics with General Relativity, you only need to omit one parameter from GR and one opinion from QM.
Omit time from GR and omit materialism from QM (the fundamental element of the universe is neither a tiny ball called a particle nor a fancy string in 26 dimensions!!!).
In this way you have one common, observable and real force: the electromagnetic force,
which we observe with different effects at different scales:
quantum effects at small scales in quantum mechanics, the familiar effects of electricity and so on at the classical scale, and one important effect at large scales that we call gravity.
So there is no independent force named gravity; it is just EM at large scales, without time and without the graviton.

- To be continued…




| Axiom - Scientific Thought - Physics |

Is there any Axiom?!

Description of Axiom
|| In a nutshell, the logico-deductive method is a system of inference in which conclusions (new knowledge) follow from premises (old knowledge) through the application of sound arguments (syllogisms, rules of inference). Tautologies excluded, nothing can be deduced if nothing is assumed. Axioms and postulates are the basic assumptions underlying a given body of deductive knowledge. They are accepted without demonstration. All other assertions (theorems, if we are talking about mathematics) must be proven with the aid of these basic assumptions.
The logico-deductive method was developed by the ancient Greeks and has become the core principle of modern mathematics. However, the interpretation of mathematical knowledge has changed from ancient times to the modern, and consequently the terms axiom and postulate hold a slightly different meaning for the present-day mathematician than they did for Aristotle and Euclid.
The ancient Greeks considered geometry as just one of several sciences, and held the theorems of geometry on a par with scientific facts. As such, they developed and used the logico-deductive method as a means of avoiding error, and for structuring and communicating knowledge. Aristotle's Posterior Analytics is a definitive exposition of the classical view.
At the foundation of the various sciences lay certain basic hypotheses that had to be accepted without proof. Such a hypothesis was termed a postulate. The postulates of each science were different. Their validity had to be established by means of real-world experience. Indeed, Aristotle warns that the content of a science cannot be successfully communicated, if the learner is in doubt about the truth of the postulates.
A great lesson learned by mathematics in the last 150 years is that it is useful to strip the meaning away from the mathematical assertions (axioms, postulates, propositions, theorems) and definitions. This abstraction, one might even say formalization, makes mathematical knowledge more general, capable of multiple different meanings, and therefore useful in multiple contexts.

In structuralist mathematics we go even further, and develop theories and axioms (like field theory, group theory, topology, vector spaces) without any particular application in mind. The distinction between an axiom and a postulate disappears. The postulates of Euclid are profitably motivated by saying that they lead to a great wealth of geometric facts. The truth of these complicated facts rests on the acceptance of the basic hypotheses. We get theories that have meaning in wider contexts, hyperbolic geometry for example. We must simply be prepared to use labels like line and parallel with greater flexibility. The development of hyperbolic geometry taught mathematicians that postulates should be regarded as purely formal statements, and not as facts based on experience.
When mathematicians employ the axioms of a field, the intentions are even more abstract. The propositions of field theory do not concern any one particular application; the mathematician now works in complete abstraction. There are many examples of fields and field theory gives correct knowledge in all contexts. It is not correct to say that the axioms of field theory are propositions that are regarded as true without proof. Rather, the Field Axioms are a set of constraints. If any given system of addition and multiplication tolerates these constraints, then one is in a position to instantly know a great deal of extra information about this system. This way, there is a lot of gain for the mathematician.
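For concreteness (standard material, not specific to this essay), the field axioms constrain two operations, + and ·, on a set F:

\[
\begin{aligned}
&\text{associativity:} && a+(b+c)=(a+b)+c, \quad a(bc)=(ab)c \\
&\text{commutativity:} && a+b=b+a, \quad ab=ba \\
&\text{identities:} && a+0=a, \quad a\cdot 1=a \ (1\neq 0) \\
&\text{inverses:} && a+(-a)=0, \quad a\,a^{-1}=1 \ (a\neq 0) \\
&\text{distributivity:} && a(b+c)=ab+ac
\end{aligned}
\]

Any system of addition and multiplication satisfying these constraints (the rationals, the reals, the integers modulo a prime) immediately inherits every theorem of field theory.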

Modern mathematics formalizes its foundations to such an extent that mathematical theories can be regarded as mathematical objects, and logic itself can be regarded as a branch of mathematics. Frege, Russell, Poincaré, Hilbert, and Gödel are some of the key figures in this development.
In the modern understanding, a set of axioms is any collection of formally stated assertions from which other formally stated assertions follow by the application of certain well-defined rules. In this view, logic becomes just another formal system. A set of axioms should be consistent; it should be impossible to derive a contradiction from the axioms. A set of axioms should also be non-redundant; an assertion that can be deduced from the other axioms need not be regarded as an axiom.
It was the early hope of modern logicians that various branches of mathematics, perhaps all of mathematics, could be derived from a consistent collection of basic axioms. An early success of the formalist program was Hilbert's formalization of Euclidean geometry, and the related demonstration of the consistency of those axioms. In a wider context, there was an attempt to base all of mathematics on Cantor's set theory. Here the emergence of Russell's paradox, and similar antinomies of naive set theory raised the possibility that any such system could turn out to be inconsistent.

The formalist project suffered a decisive setback, when in 1931 Gödel showed that it is possible, for any sufficiently large set of axioms (Peano's axioms, for example) to construct a statement whose truth is independent of that set of axioms. As a corollary, Gödel proved that the consistency of a theory like Peano arithmetic is an un-provable assertion within the scope of that theory.
It is reasonable to believe in the consistency of Peano arithmetic because it is satisfied by the system of natural numbers, an infinite but intuitively accessible formal system. However, at this date we have no way of demonstrating the consistency of modern set theory (the Zermelo-Fraenkel axioms). The axiom of choice, a key hypothesis of this theory, remains a very controversial assumption. Furthermore, using techniques of forcing (Cohen), one can show that the continuum hypothesis (Cantor) is independent of the Zermelo-Fraenkel axioms. Thus, even this very general set of axioms cannot be regarded as the definitive foundation for mathematics.
We know that the efforts of mathematicians to recover the apodictic certainty of the axioms as ‘self-evident truths’ during the ‘crisis in the foundations’ failed in its task. The last bastion of rationalism finally fell. All this took place during and after the rise of positivism in the nineteenth century and, in one of history’s ironies, while the ground was giving way beneath the rationalist position, positivism was declaring that mathematics was to be excluded from the sciences because it was not empirical.
Simple axioms are combined in our daily lives via the laws of logic without our realizing it, enabling us to build more complex theorems.
||

Is there any Axiom?
In science, of course, there is no axiom apart from some simple definitions, and nothing should be accepted without definition.
We must prove all phenomena (natural phenomena and beliefs alike).
Yet consider that our science is based on some unexamined axioms: we don't know why we accept them, or how and why we use them! That is why we have so many paradoxes and unsolved problems in science.
To read more, see:

- To be continued…



| Nature - Measure - CasimirEffect |

How does Nature measure?

Description of Casimir Effect by Miriam Strauss;
|| The Casimir effect has become important because of its central role in fundamental physics as well as modern technology. Despite its complete quantum nature, with origins in the zero-point energy of a quantized field, the Casimir force is a macroscopic phenomenon. With the recent development of highly sensitive force measurement techniques, it has been demonstrated and measured with unprecedented precision. Because of its unique dependence on the separation and geometry, the Casimir force is expected to play an important role in modern nano-electro-mechanical systems.
The precision measurement of the Casimir force has also been advanced as a new powerful test for hypothetical long-range interactions, supersymmetry, supergravity and string theory.
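For reference, the standard result for two ideal parallel plates separated by a distance d is an attractive pressure

\[ \frac{F}{A} = \frac{\pi^2 \hbar c}{240\, d^4} \approx 1.3\ \text{mPa at } d = 1\ \mu\text{m}, \]

which is why the force only becomes noticeable at sub-micron separations and why it matters for nano-electro-mechanical systems.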
Since Einstein gave the energy of gravity to curvature, the zero-point energy had to be explained in other ways. You can have no curvature at a point. The only other field available was the E/M field, and that is the field that zero-point energy has been given to. According to the standard model, virtual particles mediate the E/M field at the most basic level, and it is the summation of that field that is now thought to go to infinity. Specifically, it is the so-called second quantization of the virtual photons that creates the forces at the zero-point.
The Casimir Effect is an attractive force between metal plates that have been demagnetized and carry no electrical charge. No one before or after Casimir has thought that the effect had anything to do with gravity. Since the standard model postulates that zero-point energy causes the Casimir force, and holds that there is no curvature at a point, the energy and force must then come from the E/M field somehow. Another reason is that everyone working at the top end of physics since 1948 (the year Casimir predicted the effect) has been in QED; they were therefore understandably keen to claim the effect for their own field. Casimir developed his equations from QED assumptions, and he did this before he ran the experiment. That is to say, he predicted the force and predicted that it arose out of quantization.
Then he ran the experiment to prove it. He never considered the possibility that the force was gravitational, since his equations were not gravitational. And he never ruled out gravity based on any theoretical or mathematical analysis. His QED analysis gave him a number and he simply sought it in the experiment. When the experiment failed to produce the number he needed, he did not let this deter him, and no one since has let it deter them either.
Also notice how the gravitational field is never once mentioned in any explanation of the Casimir Effect. It was never discussed why the force cannot be gravitational, although you would think that would be the first force to be checked. The Casimir Effect is an attraction; and so, as the first preliminary step the physicists should have exhausted that possibility.
Another reason 20th-century physicists didn't want to fool with the gravitational field in connection with Casimir is that Casimir's plates are not spheres. They are plates. It isn't immediately clear, without some work, how one would use Newton's equation on plates. Newton does not address flat objects in the Principia, so no standard equation was sitting around waiting to be plugged in.
This makes it very odd that the standard model has preferred to use renormalized equations, in difficult and opaque derivations, in order to give the force to QED, rather than tweak Newton's equation in simple and transparent ways. Of course, had they done the latter, they could no longer use Casimir as a proof of quantization, wormholes, and so on.
The Casimir effect is caused by the unified field, by both gravity and E/M! (If gravity exists!?) And since it has been shown that Newton's equation is a compound equation that already includes both fields, we can apply Newton's equation directly to Casimir, without even un-unifying it or splitting the two fields.
Nonetheless, it is only by continued, careful consideration of such proposals that we can hope to resolve the issue of whether energy can be extracted from the vacuum as part of a generalized vacuum engineering. Anyone considering the possibility of vacuum-energy extraction must, of course, approach any particular device claim or theoretical proposal with the utmost rigor with regard to verification and validation.
At this point we must of necessity fall back on a quote given by Podolny when contemplating this same issue. "It would be just as presumptuous to deny the feasibility of a useful application as it would be irresponsible to guarantee the success of such an application." Only the future can reveal whether a program to extract energy from the vacuum will meet with success.
 ||


And that is how nature measures.
Nature can measure everything from the vacuum inside atoms to the vacuum around black holes.
It is quantum fluctuation that acts on the two metallic plates (or …) and forces them to move together.
Every action in the universe is a measurement.
And vacuum fluctuation (quantum fluctuation) works everywhere you can imagine, and even where you cannot: within a Schwarzschild radius or in the empty space between nucleus and electron. But remember, there is no exactly empty place or space, because bits can exist everywhere.
When elementary particles oscillate at quantum scales they perform measurements and emit bits.
So nature itself can measure wherever it wants, and of course we can use this ability of nature to measure and obtain information from places that are unknown today, like black holes, dark matter, dark energy and so on.
It is possible,
because it is a real method,
and because physics allows it.


- To be continued...
(We are going to research further the relevance of information (bits) to forces,
and will present it in an essay: "What is Gravity?")







| Reality - Physics - Nature |

Mathematics & Philosophy

Description of Mathematics & Philosophy;
|| Many people see mathematics and philosophy as contradictory and unrelated, while many others see philosophy as maintaining the closest relation to mathematics. So who is right? What are the relations between math and philosophy? In what do they differ, and in what are they similar?

As a definition, mathematics comes from the Greek word "mathema"; it is the study of natural quantities, shapes, relationships between any number of things, and the variation of things with respect to the variation of a given trait, based on a given set of axioms and postulates (which are absolute and do not differ from one mathematical system to another). For example, expressed mathematically in the simplest way, y = ax, where the value y varies as a times x for every value of x, and a is a constant coefficient called the slope (the rate of change of the whole function).

Philosophy, on the other hand, means the study of basic concepts related to reality, existence, truth, religion and language, based on rational argument and a sense of logic. It does not content itself with asking, for example, "what is the truth?" and answering, "it is what really happened and what really should have happened." It goes deep inside the answer and asks: why should it have happened this way? Why should it have happened in the first place? Why did this really happen? All of this, as mentioned above, based on rational argument and logical axioms (which may differ from one school of thought to another).

Starting with the similarities between mathematics and philosophy, one could say that math is similar to philosophy in its systematic approach using logic, axioms and definitions, besides the fact that both relate things to each other. Philosophy, for example, says: "existence could not have arisen from nothing, because absolute nothing means that nothing would be found; so if one sees an existence, one can infer that something brought it about." Similarly, mathematics says that if y is differentiable then y' is differentiable under certain given conditions, and that if a figure has four 90-degree angles and four equal sides then it is a square, and so is any similar figure.
That is, both use the deductive method of answering questions according to some given basic laws.
Mathematics, however, differs from philosophy in one major way: the answers that math gives are absolute, certain and beyond doubt, because most of them are ultimately based on nature, observation and abstract logic. For example, 1 + 1 = 2 is certain and absolute: for any two things you put beside each other you will get two things as a result. Or, for example, the integral of x is (1/2)x^2, whatever the function is called and whatever single linear variable it contains (in terms of naming: g(x), f(x), h(y), z(q), etc.).
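As a quick sanity check of that integral, here is a sketch assuming the SymPy library is installed:

import sympy as sp

x = sp.symbols("x")
print(sp.integrate(x, x))        # x**2/2  (an arbitrary constant may be added)

# The result does not depend on what we call the variable or the function:
q = sp.symbols("q")
print(sp.integrate(q, q))        # q**2/2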
Philosophy, by contrast, gives an uncertain answer that always remains open to doubt.
Another difference is that the axioms philosophy starts from are not abstract; they are rational (sometimes), but not abstract. The philosopher starts from what he would like to prove or disprove, using arguments that are convincing but not always true. That is why you see such a diversity of opinions about any one philosophical issue, such as creation and existence, or the mind and its capabilities, etc.
The fact that mathematics is true and absolute, on the other hand, comes from the fact that mathematics, as mentioned above, starts from axioms based on natural observation, with the mathematician remaining abstract and searching for the truth in generalized form, not for what he would like to prove or disprove. So, in mathematics, whatever is stated as a theorem is absolute and verified for every case. Still, postulates can sometimes be replaced, with both versions true in their own frame of reference: in differential geometry, for instance, Riemann rejected the assumption that space is flat and broke with the inherited Euclidean postulates about lines and planes; HOWEVER, BOTH ARE TRUE, EUCLID AND RIEMANN, depending on the space you are in.
To conclude this comparison, we restate the similarities: they are alike in their (in philosophy, sometimes) logical approach and in reasoning from given axioms and postulates.
And they differ in this: math is absolute, while philosophy is not; it can still be doubted and disproved.
Math is 100% rational and abstract, while the philosopher is not: the mathematician searches for the truth, whereas philosophers create a truth based on imagination and turn that imagination into "truth" by means of convincing arguments.
||


However, although both mathematics and philosophy claim to study reality and nature, in fact they are mostly fancy, because they often talk about empty things. If you do not assume something real (spatial, geometrical, observable) behind mathematical equations, all the equations are empty. And many long speeches full of difficult words in philosophy, with too many "-isms", are empty too, because there is no connection between them and reality. Many "-isms", each with its own different rules and laws to shape your mind, means you are not free: with philosophy you become the pet of other people who control and ride you!
The absolute answers of mathematics versus the doubtable answers of philosophy!
There is no certainty in the real world, because the uncertainty principle governs it (see the relation below).
And the doubtable answers of philosophy mean there is no powerful, strong idea in philosophy (everyone can have their own "-ism"!).
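For reference, Heisenberg's uncertainty principle for position and momentum is usually written as:

\[
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}.
\]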
But in physics, physicists must think, simulate, and imagine many things, compare them with the observable environment, and then use mathematical equations to summarize their findings.
That is science.





| Hyper Dimension - Information - Geometry |

What is Dimension?

Description of Hyper Dimension & String Theory;
|| The revolutionary concept of string theory is a bold realization of Einstein's dream of an ultimate explanation for everything from the tiniest quanta of particle physics to the entire cosmos itself. String theory unifies physics by producing all known forces and particles as different vibrations of a single substance called superstrings. String theory brings quantum consistency to physics with an elegant mathematical construct that appears to be unique.
The strings themselves are probably too tiny to observe directly, but string theory makes a number of testable predictions. It implies super-symmetry and predicts seven undiscovered dimensions of space, dimensions that would give rise to much of the mysterious complexity of particle physics. Testing the validity of string theory requires searching for the extra dimensions and exploring their properties. How many are there? What are their shapes and sizes? How and why are they hidden? And what are the new particles associated with the extra dimensions?
The physical effects of extra dimensions depend on their sizes and shapes, and on what kinds of matter or forces can penetrate them. The sizes of the extra dimensions are unknown, but they should be related to fundamental energy scales of particle physics: the cosmological scale, the density of dark energy, the electroweak scale, or the scale of ultimate unification. It may be possible to infer extra dimensions of macroscopic size from inconsistencies in cosmological observations, or from precision tests of short range gravitational forces. More likely, the extra dimensions are microscopic, in which case high-energy particle accelerators and cosmic ray experiments are the only ways to detect their physical effects.
The LHC and a Linear Collider will address many questions about extra dimensions: How are they hidden? What are the new particles associated with extra dimensions? Through the production of new particles that move in the extra space, the LHC will have direct sensitivity to extra dimensions 10 billion times smaller than the size of an atom. A Linear Collider would determine the number, size, and shape of extra dimensions through their small effects on particle masses and interactions. There is also a chance that, due to the existence of extra dimensions, microscopic black holes may be detected at the LHC or in the highest-energy cosmic rays.
Extra space dimensions are not easy to imagine; in everyday life, nobody ever notices more than three. Any move you make can be described as the sum of movements in three directions: up-down, back-and-forth, or sideways. Similarly, any location can be described by three numbers (on Earth: latitude, longitude, and altitude), corresponding to space's three dimensions.
Other dimensions could exist, however, if they were curled up in little balls, too tiny to notice. If you moved through one of those dimensions, you’d get back to where you started so fast you’d never realize that you had moved.
An extra dimension of space could really be there, it’s just so small that we don’t see it.
Something as tiny as a subatomic particle, though, might detect the presence of extra dimensions: certain properties of matter's basic particles, such as electric charge, may have something to do with how those particles interact with tiny invisible dimensions of space.
In this picture, the Big Bang that started the baby universe growing 14 billion years ago blew up only three of space's dimensions, leaving the rest tiny. Many theorists today believe that 6 or 7 such unseen dimensions await discovery.
As an example concerning dimensions, consider the Kaluza-Klein theory of the 1920s: in the original version, the extra space dimension was rolled up into a tiny circle, existing everywhere but too small to see with existing instruments. The vibrations of the gravitational field in the rolled-up extra space dimension would look to observers like vibrations of an electromagnetic field and a scalar field in the remaining three space and one time dimensions.
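Schematically (this particular form of the ansatz is an illustrative sketch, not quoted from the original), the five-dimensional Kaluza-Klein metric is often written so that the four-dimensional metric, the electromagnetic potential, and the scalar all appear together:

\[
ds_{5}^{2} \;=\; g_{\mu\nu}\,dx^{\mu}dx^{\nu} \;+\; \phi^{2}\,\bigl(dy + A_{\mu}\,dx^{\mu}\bigr)^{2},
\]

where g_{\mu\nu} is the 4D metric, A_{\mu} plays the role of the electromagnetic potential, \phi is the scalar field, and y is the coordinate on the small circle.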
In string theory, Kaluza-Klein compactification of the extra dimensions has one important difference from the particle-theory version: a closed string can wind several times around a rolled-up dimension. When a string does this, its oscillations acquire a winding mode. The winding modes add a new symmetry to the theory that is not present in particle physics.
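For a closed string on a circle of radius R, the contribution of momentum and winding modes to the mass is usually written, schematically, as:

\[
m^{2} \;\simeq\; \frac{n^{2}}{R^{2}} \;+\; \frac{w^{2}R^{2}}{\alpha'^{2}} \;+\; \text{oscillator terms},
\]

where n is the integer momentum number, w the winding number, and \alpha' the string scale; exchanging n \leftrightarrow w together with R \leftrightarrow \alpha'/R leaves the spectrum unchanged (the new symmetry known as T-duality).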
Although Kaluza's revolutionary idea had promise, it was abandoned in the wake of the quantum-mechanics craze that swept physics in the 1920s. By the early 1970s, however, the standard model was in place and most physicists felt as though everything important about the non-gravitational forces had been discovered. After numerous setbacks in merging quantum mechanics and general relativity, physicists became receptive to unusual ideas, including the resurrection of Kaluza-Klein theory. It was suggested that Kaluza's one additional spatial dimension was not enough; additional extra dimensions were needed to fully unify relativity and electromagnetism. Theories called higher-dimensional supergravity theories sprang up, partially incorporating gravity, supersymmetry, and higher dimensions. These helped to calm the quantum fluctuations barring a sensible theory of gravity, but did not do enough to eliminate the problematic infinities associated with the attempt.
Strings vibrate through all the spatial dimensions, meaning that their precise vibrational pattern is affected by the shape of the curled-up dimensions as well as the extended ones. It follows from this statement that the shape of the curled-up dimensions has an indirect effect on the observed elementary particles, since these are merely reflections of different vibrational patterns.
According to recent news published in New Scientist magazine, Stephen Hawking has come up with a way to describe the universe which suggests it may have the same geometry as the mind-boggling images of M. C. Escher.
The universe may have the same surreal geometry as some of art's most mind-boggling images. The finding may delight fans of Dutch artist M. C. Escher, but Hawking's team claim that their study provides a way to square the geometric demands of string theory, a still-hypothetical "theory of everything", with the universe we observe.
Their calculations rely on a mathematical twist that was previously considered impossible. If it stands up, it could explain how the universe emerged from the big bang and unite gravity and quantum mechanics.

((Using fancy, unscientific notions to explain scientific theories and natural phenomena is one of the silliest things that some scientists, like Hawking, keep doing.
They suppose lots of undefined, fanciful things to solve paradoxes, but they just create more paradoxes.))

The universe is also expanding at an accelerating rate, because of a mysterious entity known as dark energy. We do not know what dark energy is or where it came from, but the mathematical language provided by Einstein's theory of general relativity has a way to describe this accelerated expansion: sticking a constant known as the cosmological constant into the general-relativity equations keeps the universe expanding forever, but only if the constant has a positive sign. Until now, saying we live in an ever-expanding universe has been the same as saying our universe has a positive cosmological constant, and vice versa!
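For reference, the cosmological constant \Lambda enters Einstein's field equations as:

\[
G_{\mu\nu} + \Lambda\, g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\]

and the accelerated expansion described above corresponds to \Lambda > 0.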
There are some outstanding problems, however. General relativity covers this aspect of the universe, but it cannot describe the big bang. Nor can it unite gravity, which works on large scales, with quantum mechanics, which works on very small scales.
That means you cannot predict why we live in the universe that we live in.
String theory, meanwhile, offers a beautifully complete picture of the universe's history and connects gravity to quantum mechanics, but it is most comfortable in a universe with a negatively curved, Escher-like geometry and with a negative cosmological constant.
This left physicists with a deep chasm to cross: on one side is a universe that works but lacks a complete theory, and on the other is a complete theory that does not describe the actual universe.
Looking at the bigger picture, in the search for the "one" unified field theory, it seems the outcome has to include the dual symmetry of both the positive and the negative constants together to form the "One".

((But they cannot find the final theory of everything, because they think the universe is symmetrical; of course it is NOT symmetrical.))
||


It is all about matrices!
That is, scientists represented spatial coordinates as a matrix. Then each spatial (geometrical) parameter became like an ordinary parameter, and ordinary (mathematical) parameters became like spatial parameters!
They discovered new, complex, advanced phenomena, but they did not want to build new, more advanced equations, so they tried to use matrices, putting all the new parameters in one place and then solving the equations.
When they needed time, they entered it as a new parameter (a new dimension) into the coordinate matrix, [x y z] > [x y z t] (as sketched below). After that, whenever they thought a new parameter was required, they simply named it with a Latin letter or a new symbol and entered it into the matrix. They thought that if they had a new parameter with the required properties and entered it into the equations, the equations could be solved easily, or the paradoxes resolved! So they just invented dimensions (parameters) and supposed a new universe with hyper-dimensions.
(3D, 4D, 6D, 11D, 26D, nD!!!)
For example, they could not explain what string theory is, so they invented hyper-dimensions to explain it; then they could not explain the hyper-dimensions, so they began to create a new fanciful story: that the hyper-dimensions are maybe too large or maybe too tiny for us to observe, but that we should accept them anyway!
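As a schematic illustration of the coordinate extension mentioned above (the four-vector notation here is the standard relativistic convention, added as a sketch rather than quoted from the original):

\[
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
\;\longrightarrow\;
\begin{bmatrix} x \\ y \\ z \\ t \end{bmatrix},
\qquad
x^{\mu} = (ct,\; x,\; y,\; z),
\]

so that time enters the equations on the same footing as the spatial parameters.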

Because of these fanciful stories, they cannot pin down a Grand Unified Theory (GUT).
They could not explain the universe with a symmetrical theory, so they invented super-symmetry instead of accepting ASYMMETRY!
It is stupidity! They just persist in the things they like and do not want to see reality. They like symmetry, so they build lots of symmetrical theories, none of which can explain the universe or be useful, because our universe is chaotic.
You cannot explain a chaotic universe with a symmetrical theory.

But from the information viewpoint there is no need to define hyper-dimensions, or even 3D, because for the basis of the universe (information), space, place, and time are meaningless.
Just one real physical parameter: the Bit.