Sunday, August 30, 2009

Prolog

Logic programming is a programming paradigm based on mathematical logic. In this paradigm the programmer
specifies relationships among data values (this constitutes a logic program) and then poses queries to
the execution environment (usually an interactive interpreter) in order to see whether certain relationships
hold. Put another way, a logic program, through explicit facts and rules, defines a base of knowledge
from which implicit knowledge can be extracted. This style of programming is popular for database
interfaces, expert systems, and mathematical theorem provers. In this tutorial you will be introduced to
Prolog, the primary logic programming language, through the interactive SWI-Prolog system (interpreter).
You will notice that Prolog has some similarities to a functional programming language such as Haskell (as implemented by the Hugs interpreter). A
functional program consists of a sequence of function definitions — a logic program consists of a sequence
of relation definitions. Both rely heavily on recursive definitions. The big difference is in the underlying
execution “engine” — i.e., the imperative parts of the languages. The execution engine of a functional
language evaluates an expression by converting it to an acyclic graph and then reducing the graph to a
normal form which represents the computed value. The Prolog execution environment, on the other hand,
doesn’t so much “compute” an answer, it “deduces” an answer from the relation definitions at hand. Rather
than being given an expression to evaluate, the Prolog environment is given an expression which it interprets
as a question:
For what parameter values does the expression evaluate to true?
You will see that Prolog is quite different from other programming languages you have studied. First,
Prolog has no types. In fact, the basic logic programming environment has no literal values as such. Identifiers
starting with lower-case letters denote data values (almost like values in an enumerated type) while all other
identifiers denote variables. Though the basic elements of Prolog are typeless, most implementations have
been enhanced to include character and integer values and operations. Also, Prolog has mechanisms built
in for describing tuples and lists. You will find some similarity between these structures and those provided
in Hugs.
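
To make the facts-and-rules idea concrete before diving into SWI-Prolog itself, here is a minimal sketch written in Python purely as an analogy (the relations parent and ancestor are invented for illustration; in Prolog they would be written as clauses such as parent(tom, bob). and queried interactively): explicit facts plus a recursive rule form the knowledge base, and the query asks for which values the relation holds.

    # Explicit facts: parent(X, Y) means X is a parent of Y.
    parent = {("tom", "bob"), ("bob", "ann"), ("bob", "pat")}

    def ancestor(x, y):
        """Recursive rule: X is an ancestor of Y if X is a parent of Y,
        or X is a parent of some Z who is an ancestor of Y."""
        if (x, y) in parent:
            return True
        return any(z == x and ancestor(w, y) for (z, w) in parent)

    # "Query": for which values of X does ancestor(X, "ann") hold?
    people = {p for pair in parent for p in pair}
    print([x for x in sorted(people) if ancestor(x, "ann")])   # ['bob', 'tom']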

The Evolution of Lisp

Lisp is the world’s greatest programming language—or so its proponents think. The
structure of Lisp makes it easy to extend the language or even to implement entirely new
dialects without starting from scratch. Overall, the evolution of Lisp has been guided
more by institutional rivalry, one-upsmanship, and the glee born of technical cleverness
that is characteristic of the “hacker culture” than by sober assessments of technical
requirements. Nevertheless this process has eventually produced both an industrial-strength
programming language, messy but powerful, and a technically pure dialect,
small but powerful, that is suitable for use by programming-language theoreticians.
We pick up where McCarthy’s paper in the first HOPL conference left off. We trace
the development chronologically from the era of the PDP-6, through the heyday of
Interlisp and MacLisp, past the ascension and decline of special purpose Lisp machines,
to the present era of standardization activities. We then examine the technical evolution
of a few representative language features, including both some notable successes and
some notable failures, that illuminate design issues that distinguish Lisp from other
programming languages. We also discuss the use of Lisp as a laboratory for designing
other programming languages. We conclude with some reflections on the forces that
have driven the evolution of Lisp.

Friday, August 7, 2009

Applications of Integrated Image and Ladar Sensors for UAVs

Abstract:

Small unmanned air vehicles (UAVs) have the potential to provide real-time surveillance information for a wide range of applications in a relatively low-cost and low-risk manner. Potential applications include search and rescue, coastal surveillance, fire spotting, and defence.

Saturday, August 1, 2009

MAGNETORESISTIVE RAM (MRAM)

MRAM is a type of memory that uses magnetic properties to store data. This new type of chip will compete with established forms of semiconductor memory, such as Flash and RAM. Most engineers believe that the technology, called MRAM, could reduce the cost and power consumption of electronics for cell phones, music players, laptops, and servers. The feature that makes MRAM an alluring alternative to other forms of semiconductor memory is the way it stores data. Flash memory and RAM, for example, hold information as electric charge. In contrast, MRAM uses the magnetic orientation of electrons to represent bits. It holds data without a power supply and can be written to and read from an unlimited number of times. Reading and writing data from MRAM is also fast, taking a matter of nanoseconds.

COOPERATIVE LINUX

Cooperative Linux, abbreviated as coLinux, is software that lets Microsoft Windows cooperate with the Linux kernel to run both in parallel on the same machine. Cooperative Linux utilizes the concept of a Cooperative Virtual Machine (CVM). In contrast to traditional VMs, the CVM shares resources that already exist in the host OS. In traditional (host) VMs, resources are virtualized for every (guest) OS. The CVM gives both OSs complete control of the host machine, while the traditional VM sets every guest OS in an unprivileged state to access the real machine. The term "cooperative" is used to describe two entities working in parallel. In effect, Cooperative Linux turns the two different operating system kernels into two big coroutines. Each kernel has its own complete CPU context and address space, and each kernel decides when to give control back to its partner. However, while both kernels theoretically have full access to the real hardware, modern PC hardware is not designed to be controlled by two different operating systems at the same time. Therefore the host kernel is left in control of the real hardware, and the guest kernel contains special drivers that communicate with the host and provide various important devices to the guest OS.
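
The "two big coroutines" picture can be sketched with ordinary Python generators; this is only an analogy for intuition, not how coLinux is implemented, and the names are invented:

    def kernel(name, steps):
        # Each "kernel" runs until it voluntarily yields control to its partner;
        # exactly one of them executes at any moment, with no preemption between them.
        for i in range(steps):
            print(f"{name}: doing work, step {i}")
            yield                      # give control back to the partner

    host = kernel("Windows host", 3)
    guest = kernel("Linux guest", 3)

    # A trivial scheduler alternates between the two coroutines until both finish.
    pending = [host, guest]
    while pending:
        for k in list(pending):
            try:
                next(k)
            except StopIteration:
                pending.remove(k)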

Elliptic Curve Cryptography

Elliptic curve cryptography (ECC) is a public key encryption technique based on elliptic curve theory that can be used to create faster, smaller, and more efficient cryptographic keys. ECC generates keys through the properties of the elliptic curve equation instead of the traditional method of generating them as the product of very large prime numbers. The technology can be used in conjunction with most public key encryption methods, such as RSA and Diffie-Hellman. ECC can yield a level of security with a 164-bit key that other systems require a 1,024-bit key to achieve. Because ECC helps to establish equivalent security with lower computing power and battery resource usage, it is becoming widely used for mobile applications. ECC was developed by Certicom, a mobile e-business security provider, and was recently licensed by Hifn, a manufacturer of integrated circuitry and network security products. Many manufacturers, including 3COM, Cylink, Motorola, Pitney Bowes, Siemens, TRW, and VeriFone, have included support for ECC in their products.
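
As a hedged illustration of ECC in practice, the sketch below generates key pairs and performs an ECDH key agreement using the pyca/cryptography package; the package, the curve choice and the parameters are assumptions made for the example and are not taken from the post:

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Each party generates a key pair from the properties of a named elliptic curve,
    # rather than from the product of two large primes as in RSA.
    alice_private = ec.generate_private_key(ec.SECP256R1())
    bob_private = ec.generate_private_key(ec.SECP256R1())

    # ECDH: each side combines its private key with the other's public key to reach
    # the same shared secret, which is then stretched into a symmetric key.
    shared = alice_private.exchange(ec.ECDH(), bob_private.public_key())
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo").derive(shared)
    print(len(key))  # 32-byte symmetric key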

Datagram Congestion Control Protocol (DCCP)

Fast-growing Internet applications like streaming media and telephony prefer timeliness to reliability, making TCP a poor fit. Unfortunately, UDP, the natural alternative, lacks congestion control. High-bandwidth UDP applications must implement congestion control themselves (a difficult task) or risk rendering congested networks unusable. We set out to ease the safe deployment of these applications by designing a congestion-controlled unreliable transport protocol. The outcome, the Datagram Congestion Control Protocol (DCCP), adds to a UDP-like foundation the minimum mechanisms necessary to support congestion control. We thought those mechanisms would resemble TCP's, but without reliability and, especially, cumulative acknowledgements, we had to reconsider almost every aspect of TCP's design. The resulting protocol sheds light on how congestion control interacts with unreliable transport, how modern network constraints impact protocol design, and how TCP's reliable byte-stream semantics intertwine with its other mechanisms, including congestion control.
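
For a feel of how an application would ask for DCCP rather than TCP or UDP, here is a hedged, Linux-only sketch; Python does not always expose these constants by name, so the Linux ABI values are written out, and the server address is hypothetical:

    import socket

    SOCK_DCCP = 6        # Linux socket type for DCCP
    IPPROTO_DCCP = 33    # Linux protocol number for DCCP

    # Opens a datagram-oriented, congestion-controlled, but unreliable connection;
    # the connect only succeeds where the kernel's DCCP support is enabled.
    sock = socket.socket(socket.AF_INET, SOCK_DCCP, IPPROTO_DCCP)
    try:
        sock.connect(("example.org", 5001))   # hypothetical server and port
        sock.send(b"timely but unreliable payload")
    finally:
        sock.close()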

NEW SENSOR TECHNOLOGY

Scientists have developed and demonstrated a fluorescence-based chemical sensor that is more compact, versatile and less expensive than existing technology of its kind. The new sensor holds promise for a myriad of potential applications, such as monitoring oxygen, inorganic gases, volatile organic compounds and biochemical compounds. Selecting the right sensors is critical to implementing any military control-based subsystem in which the key factors are accuracy, precision, the ability to meet the environmental range of the intended application, and cost. Two main techniques are the Sensor Web and video sensor technology. The Sensor Web is a type of sensor network or geographic information system (GIS) that is especially well suited for environmental monitoring and control. It is an amorphous network of spatially distributed sensor platforms (pods) that wirelessly communicate with each other. This amorphous architecture is unique since it is both synchronous and router-free, making it distinct from the more typical TCP/IP-like network schemes. The architecture allows every pod to know what is going on with every other pod throughout the Sensor Web at each measurement cycle. The term video sensor (also video-sensor or videosensor) describes a technique of digital image analysis. A video sensor is application software which supports the interpretation of digital images and frame rates. Video sensors are created by programming digital algorithms. The carrier platform of a video sensor is a computer, which in turn is usually equipped with a Linux or Microsoft operating system. Video sensors are installed on top of one of the mentioned operating systems, and in combination with the carrier platform this represents a video sensor system. Video sensors are used to evaluate scenes and sequences within the image section of a CCD camera.
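
A minimal sketch of the video-sensor idea, assuming OpenCV (cv2) and a camera at index 0 are available; the threshold values are illustrative, not figures from the post:

    import cv2

    cap = cv2.VideoCapture(0)
    _, previous = cap.read()
    previous = cv2.GaussianBlur(cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY), (21, 21), 0)

    for _ in range(100):                       # analyse 100 frames, then stop
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        diff = cv2.absdiff(previous, gray)     # pixel-wise change since the last frame
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > 5000:      # "enough" changed pixels = motion event
            print("motion detected")
        previous = gray

    cap.release()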

JAVA MANAGEMENT EXTENSION (JMX)

Java Management Extensions (JMX) technology provides the tools for building distributed, Web-based, modular and dynamic solutions for managing and monitoring devices, applications, and service-driven networks. This standard is suitable for adapting legacy systems, implementing new management and monitoring solutions, and plugging into those of the future. JMX components are defined by the Java Management Extensions Instrumentation and Agent Specification. JMX is a standard for managing and monitoring all aspects of software and hardware components from Java. JMX defines three levels of entities: 1) Instrumentation, the resources to be managed; 2) Agents, the controllers of the instrumentation-level objects; and 3) Distributed Services, the mechanism by which administration applications interact with agents and their managed objects.

WIRELESS MESH NETWORKS

As various wireless networks evolve into the next generation to provide better services, a key technology, Wireless Mesh Networks (WMNs), has emerged recently. In WMNs, nodes are comprised of mesh routers and mesh clients. Each node operates not only as a host but also as a router, forwarding packets on behalf of other nodes that may not be within direct wireless transmission range of their destinations. Through this relaying process, a packet of wireless data finds its way to its destination, passing through intermediate nodes with reliable communication links. A mesh network offers multiple redundant communication paths throughout the network; if one link fails for any reason, the network automatically routes messages through alternate paths. A WMN is dynamically self-organized and self-configured, with the nodes in the network automatically establishing and maintaining mesh connectivity among themselves (creating, in effect, an ad hoc network). This feature brings many advantages to WMNs, such as low up-front cost, easy network maintenance, robustness, reliable service coverage, and a flexible architecture.
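
The self-healing routing described above can be sketched with a toy topology and a breadth-first search that simply ignores failed links; the node names and links are invented for illustration:

    from collections import deque

    links = {
        "A": {"B", "C"},
        "B": {"A", "D"},
        "C": {"A", "D"},
        "D": {"B", "C", "E"},
        "E": {"D"},
    }

    def route(src, dst, down=frozenset()):
        """Breadth-first search for a path, skipping any failed links in `down`."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == dst:
                return path
            for nxt in links[node]:
                if nxt not in seen and frozenset((node, nxt)) not in down:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(route("A", "E"))                                   # e.g. ['A', 'B', 'D', 'E']
    print(route("A", "E", down={frozenset(("B", "D"))}))     # reroutes via C when B-D fails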

DNA AND DNA COMPUTING IN SECURITY

As modern encryption algorithms are broken, the world of information security looks in new directions to protect the data it transmits. The concept of using DNA computing in the fields of cryptography and steganography has been identified as a possible technology that may bring forward a new hope for unbreakable algorithms. Is the fledgling field of DNA computing the next cornerstone in the world of information security, or is our time better spent following other paths for our data encryption algorithms of the future? This paper outlines some of the basics of DNA and DNA computing and its use in the areas of cryptography, steganography and authentication. Research has been performed in both cryptographic and steganographic situations with respect to DNA computing, but researchers are still looking at much more theory than practicality. The constraints of its high-tech lab requirements and computational limitations, combined with labour-intensive extrapolation methods, illustrate that the field of DNA computing is far from any kind of efficient use in today's security world. DNA authentication, on the other hand, has exhibited great promise, with real-world examples already surfacing in the marketplace today.

BEHAVIORAL CLONING

Controlling a complex dynamic system such as a plane or a crane usually requires a skilled operator. Such a control skill is typically hard to reconstruct through introspection. Therefore an attractive approach to the reconstruction of control skill involves machine learning from operators' control traces, also known as behavioral cloning. Behavioral cloning is a method by which a machine learns control skills through observing what a human controller would do in a certain set of circumstances. It seeks to build a robust and explainable model by learning from the traces of a skilled operator's behavior.
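
A toy sketch of the idea, assuming scikit-learn as a dependency: the operator's traces are (state, action) pairs and a decision tree is fit to imitate them; the states and actions below are invented purely for illustration:

    from sklearn.tree import DecisionTreeClassifier

    # Each state is [position_error, velocity]; the action is the control the human chose.
    states  = [[-2.0, 0.1], [-0.5, 0.0], [0.0, 0.0], [0.6, -0.1], [2.1, 0.2]]
    actions = ["push_right", "push_right", "hold", "push_left", "push_left"]

    # Fit a small, explainable model to the operator's traces (the "clone").
    clone = DecisionTreeClassifier(max_depth=3).fit(states, actions)

    # The learned clone now proposes an action for an unseen state.
    print(clone.predict([[1.5, 0.0]]))   # likely ['push_left']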





Ferroelectric Random Access Memory

Before the 1950s, ferromagnetic cores were the only type of random-access, nonvolatile memories available. A core memory is a regular array of tiny magnetic cores that can be magnetized in one of two opposite directions, making it possible to store binary data in the form of a magnetic field. The success of the core memory was due to a simple architecture that resulted in a relatively dense array of cells. This approach was emulated in the semiconductor memories of today (DRAMs, EPROMs, and FRAMs). Ferromagnetic cores, however, were too bulky and expensive compared to the smaller, low-power semiconductor memories. In place of ferromagnetic cores, ferroelectric memories are a good substitute; the term ferroelectric indicates the similarity, despite the lack of iron in the materials themselves. Ferroelectric memory is a new type of semiconductor memory which exhibits short programming time, low power consumption and nonvolatility, making it highly suitable for applications like contactless smart cards and digital cameras, which demand many memory write operations. A ferroelectric memory technology consists of a complementary metal-oxide-semiconductor (CMOS) technology with added layers on top for ferroelectric capacitors. A ferroelectric memory cell has at least one ferroelectric capacitor to store the binary data, and one transistor that provides access to the capacitor or amplifies its content for a read operation. Once a cell is accessed for a read operation, its data are presented in the form of an analog signal to a sense amplifier, where they are compared against a reference voltage to determine their logic level. Ferroelectric memories have borrowed many circuit techniques (such as the folded-bitline architecture) from DRAMs, due to the similarity of their cells and to DRAM's maturity. Some architectures are reviewed here.

Chatterbot

A chatterbot is a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods. Though many appear to be intelligently interpreting the human input prior to providing a response, most chatterbots simply scan for keywords within the input and pull a reply with the most matching keywords or the most similar wording pattern from a local database. Like any person, chatterbots seem to have a sort of personality, which is expressed through their answers. The top chatterbots are Elbot, Talk-bot, Jabberwacky, Eugene, Alice, and Alan. In short, a chatterbot is a conversation simulator: a computer program that gives the appearance of conversing with a user in natural language.
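
A minimal sketch of the keyword-scanning approach, with the rule "database" reduced to a plain dictionary; the rules and replies are invented for illustration:

    import re

    rules = {
        ("hello", "hi"): "Hello there! How can I help you today?",
        ("weather",): "I hear it is lovely outside, but I never leave this terminal.",
        ("name",): "People call me a chatterbot.",
    }
    fallback = "Interesting. Tell me more."

    def reply(user_input):
        words = set(re.findall(r"[a-z]+", user_input.lower()))
        # Score each rule by how many of its keywords appear in the input; best match wins.
        best, score = fallback, 0
        for keywords, answer in rules.items():
            hits = len(words & set(keywords))
            if hits > score:
                best, score = answer, hits
        return best

    print(reply("Hello bot"))             # -> greeting reply
    print(reply("What is your name?"))    # -> "People call me a chatterbot."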

IP Multimedia Subsystem (IMS)

IP Multimedia Subsystem (IMS) is a generic architecture for offering multimedia and voice-over-IP services, defined by the 3rd Generation Partnership Project (3GPP). IMS is access independent, as it supports multiple access types including GSM, WCDMA, CDMA2000, WLAN, wireline broadband and other packet data applications. Existing phone systems (both packet-switched and circuit-switched) are supported. IMS will make Internet technologies, such as web browsing, e-mail, instant messaging and video conferencing, available to everyone from any location. It is also intended to allow operators to introduce new services, such as web browsing, WAP and MMS, at the top level of their packet-switched networks.

Earth Simulator

The Earth Simulator was the fastest supercomputer in the world from 2002 to 2004. NEC built this Japanese machine, which uses a parallel vector architecture to achieve a peak performance of about 40 TFLOPS. The system is configured as 640 nodes of 8 vector processors each, connected together by a crossbar switch. Each node has a shared memory of 16 GB (10 TB in total). The machine was built to analyze climate change, including global warming, as well as weather and earthquake patterns. The Earth Simulator has the power to create a “virtual planet earth” using its large processing capability. The vector processor used in it is fabricated on a single chip with 0.15-micron CMOS technology.
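
A quick back-of-the-envelope check of those figures (the 8 GFLOPS peak per vector processor is the commonly quoted value and is treated as an assumption here):

    nodes, cpus_per_node = 640, 8
    peak_per_cpu_gflops = 8            # assumed peak of one vector processor
    memory_per_node_gb = 16

    print(nodes * cpus_per_node * peak_per_cpu_gflops / 1000, "TFLOPS peak")  # 40.96
    print(nodes * memory_per_node_gb / 1024, "TB of memory")                  # 10.0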




ZFS Filesystem

ZFS: the last word in file systems. Most system administrators take the limitations of current file systems in stride. After all, file systems are what they are: vulnerable to silent data corruption, brutal to manage, and excruciatingly slow. ZFS, the dynamic new file system in Sun's Solaris 10 Operating System (Solaris OS), will make you forget everything you thought you knew about file systems. It offers: simple administration (ZFS automates and consolidates complicated storage administration concepts, reducing administrative overhead by 80 percent); provable data integrity (ZFS protects all data with 64-bit checksums that detect and correct silent data corruption); unlimited scalability (as the world's first 128-bit file system, ZFS offers 16 billion billion times the capacity of 32- or 64-bit systems); and blazing performance (ZFS is based on a transactional object model that removes most of the traditional constraints on the order of issuing I/Os, which results in huge performance gains).


Holographic Versatile Disc

Holographic Versatile Disc (HVD) is an optical disc technology still in the research stage which would greatly increase storage over Blu-ray and HD DVD optical disc systems. It employs a technique known as collinear holography, whereby two lasers, one red and one blue-green, are collimated in a single beam. The blue-green laser reads data encoded as laser interference fringes from a holographic layer near the top of the disc while the red laser is used as the reference beam and to read servo information from a regular CD-style aluminium layer near the bottom.

Intel Virtualization Technology

Virtualization abstracts a computing system's physical resources to achieve improved sharing and utilization. Full virtualization of all system resources, including processors, memory, and I/O devices, makes it possible to run multiple operating systems on a single physical platform. Virtualization has long existed in the form of emulation software, but Intel has introduced a new technology called Intel Virtualization Technology (VT) which allows OSes to run natively on the hardware. In a non-virtualized system, a single OS controls all hardware platform resources. An Intel virtualized system includes a new layer of software, the virtual machine monitor (VMM) or hypervisor. The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM). Intel Virtualization Technology can be very useful for uptime and security purposes. If there are five copies of Red Hat running Apache and the fifth one goes down, we can simply pass incoming requests to the other four while the fifth is reloading, thus increasing uptime. If we save 'snapshots' of a running OS, we can reload one every time something unpleasant happens, such as hacking or viruses: reload the image from a clean state and patch it up, quickly. We can simply load, unload and save OSes like programs. VT-x and VT-i are the first components of Intel Virtualization Technology, a series of processor and chipset innovations soon to become available in IA-based client and server platforms.
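
A small, Linux-specific sketch of how one might check whether the host CPU advertises hardware virtualization support; Intel's VT-x appears as the vmx flag in /proc/cpuinfo (svm is AMD's equivalent):

    def hardware_virtualization_flags(path="/proc/cpuinfo"):
        # Scan the first "flags" line and report any hardware-virtualization flags found.
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return {"vmx", "svm"} & flags
        return set()

    print(hardware_virtualization_flags() or "no hardware virtualization flag found")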

CELL PHONE VIRUSES AND SECURITY

Cell phones have become powerful and sophisticated computing devices and are moving toward an always-on form of networking. However, such powerful networked computers are also at risk from a new class of malware, including viruses, worms and trojans specifically designed for a mobile environment. This seminar topic covers a taxonomy of attacks against mobile phones that shows known as well as potential attacks. Understanding existing threats against mobile phones helps us better protect our information and prepare for future dangers. Security experts are finding a growing number of viruses, worms, and Trojan horses that target cellular phones. Security researchers' attack simulations have shown that, before long, hackers could infect mobile phones with malicious software that deletes personal data or runs up a victim's phone bill by making toll calls. The attacks could also degrade or overload mobile networks, eventually causing them to crash, or steal financial data. Mobile-device technology is still relatively new, and vendors have not developed mature security approaches, which adds to the risk factors for smart phones. The topic gives an introduction to security concerns in mobile appliances. To counter the growing threat, antivirus companies have stepped up their research and development. In addition, vendors of phones and mobile operating systems are looking for ways to improve security. Recent innovations and emerging commercial technologies that address these issues are also covered in the topic.

Simultaneous Multithreading

Simultaneous multithreading, often abbreviated as SMT, is a technique for improving the overall efficiency of superscalar CPUs. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures. Simultaneous multithreading allows multiple threads to execute different instructions in the same clock cycle, using the execution units that the first thread left spare. This is done without great changes to the basic processor architecture: the main additions needed are the ability to fetch instructions from multiple threads in a cycle, and a larger register file to hold data from multiple threads. The number of concurrent threads can be decided by the chip designers, but practical restrictions on chip complexity usually limit the number to 2, 4 or sometimes 8 concurrent threads.
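
A minimal sketch, assuming the psutil package, of how SMT shows up to the operating system: more logical processors than physical cores:

    import psutil

    logical = psutil.cpu_count(logical=True)
    physical = psutil.cpu_count(logical=False)
    print(f"{physical} physical cores, {logical} logical processors")
    if physical and logical and logical > physical:
        print(f"SMT appears enabled: {logical // physical} hardware threads per core")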

CYBORG

A cyborg is a cybernetic organism. Cybernetics is the study of communication and control of machines, organisms, or a mixture of both. A cyborg is a mixture of organism and technology; an organism with any type of technological enhancement can be called a cyborg. It is half living and half machine. The process of becoming a cyborg is called cyborgation. There are different types of cyborgs, such as cyborgs with artificial hearts, cyborgs with retinal implants, cyborgs with brain sensors, and cyborgs with bionic arms, as well as animal cyborgs such as the cyborg eel and the robo-roach. Cyborgs find their application mainly in the military, medicine, and related fields. The US military is exploring cyborgation for its soldiers, and cyborgation is used in the treatment of Parkinson's disease and for people with vision problems.

QUANTUM DOT LASERS

Most modern semiconductor lasers operate based on quantum mechanical effects. Quantum well lasers have been used with impressive performance, while the novel quantum dot lasers, a subject of intense research, show great promise. Lasers come in many sizes and can be made from a variety of resonant cavities and active laser materials. Generally, increasing confinement enforces an increasing quantization of the energy of electrons; therefore quantum dots re-emit light at nearly a single wavelength. Quantum dots are therefore a good starting point for producing laser light.

Content Scramble System

Content Scramble System (CSS) is an encryption system used on some DVDs. It uses a weak, proprietary 40-bit stream cipher algorithm. The system was introduced circa 1996. The CSS key sets are licensed to manufacturers who incorporate them into products such as DVD drives, DVD players and DVD movie releases. Most DVD players are equipped with a CSS decryption module. CSS key is a collective term for the authentication key, disc key, player key, title key, second disc key set, and/or encrypted key.
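
A rough illustration of why a 40-bit key is considered weak; the brute-force rate below is an assumption for the sake of the arithmetic, not a measured figure:

    keyspace = 2 ** 40                     # roughly 1.1 trillion possible keys
    keys_per_second = 1e9                  # assumed brute-force rate on modern hardware
    print(keyspace / keys_per_second / 60, "minutes to try every key")   # ~18 minutes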

AUTOSAR

The idea behind AUTOSAR (AUTomotive Open System ARchitecture) is to create an open standard for fundamental software system functions that replaces proprietary standards. This creates a plug-and-play environment where software modules slot into the overall electronic architecture without unexpectedly disrupting others. The objective of the partnership is the establishment of an open standard for automotive E/E architecture. It will serve as a basic infrastructure for the management of functions within both future applications and standard software modules. The goal is to have a standardized tool chain that will guide and assist the design engineer through the complete process.

Software Radio

Software radio is the art and science of building radios using software. Given the constraints of today's technology, there is still some hardware involved, but the idea is to get the software as close to the antenna as possible; ultimately we are turning hardware problems into software problems. A software radio is a radio whose channel modulation waveforms are defined in software. The fundamental characteristic is that software defines the transmitted waveforms, and software demodulates the received waveforms. That is, waveforms are generated as sampled digital signals, converted from digital to analog via a wideband DAC, and then possibly upconverted from IF to RF. The receiver, similarly, employs a wideband ADC that captures all of the channels of the software radio node. The receiver then extracts, downconverts and demodulates the channel waveform using software on a general-purpose computer.
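
To illustrate "software defines the transmitted waveform", the sketch below builds a BPSK waveform entirely in software with NumPy, ready to be handed to a wideband DAC; the sample rate, carrier frequency and bit rate are illustrative values:

    import numpy as np

    bits = np.array([1, 0, 1, 1, 0])
    sample_rate = 1_000_000        # samples per second
    carrier_hz = 100_000           # intermediate-frequency carrier
    samples_per_bit = 1000

    t = np.arange(len(bits) * samples_per_bit) / sample_rate
    symbols = np.repeat(2 * bits - 1, samples_per_bit)        # map 1 -> +1, 0 -> -1
    waveform = symbols * np.cos(2 * np.pi * carrier_hz * t)   # phase flips encode the bits

    print(waveform.shape)   # 5000 samples; the receive path would run the same math in reverse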

OpenRAN

Cellular telephony networks depend on an extensive wired network to provide access to the radio link. The wired network, called a radio access network (RAN), provides such functions as power control and, in CDMA networks, combination of soft handoff legs (also known as macro diversity resolution), which require coordination between multiple radio base stations and multiple mobile terminals. Existing RAN architectures for cellular systems are based on a centralized radio network controller (RNC) connected by point-to-point links with the radio base transceiver stations. The existing architecture is subject to a single point of failure if the RNC fails, and is difficult to expand because adding an RNC is expensive. Also, although a network operator may have multiple radio link protocols available, most RAN architectures treat each protocol separately and require a separate RAN control protocol for each. In this article we describe a new architecture for mobile wireless RANs, the OpenRAN architecture, based on a distributed processing model with a routed IP network as the underlying transport fabric. OpenRAN was developed by the Mobile Wireless Internet Forum's IP in the RAN working group. The OpenRAN architecture applies to the radio access network principles that have been successful in reducing cost and increasing reliability in data communications networks. The result is an architecture that can serve as the basis for an integrated next-generation cellular radio access network.

VoIP in Mobile Phones

Today is the world of mobility, and the one device that is truly mobile is the mobile phone. Calling from mobile phones, however, is expensive; the cheapest calling method is PC-to-PC calling, which costs almost nothing because it uses VoIP. In this seminar we look into implementing VoIP on mobile phones. Different networks such as GPRS/EDGE, Bluetooth and WiFi are now common in mobile phones, and we look into each of them and the advantages and disadvantages of each.

Distributed Quota Enforcement for Spam

Spam, by overwhelming inboxes, has made email a less reliable medium than it was just a few years ago. Spam filters are undeniably useful but unfortunately can flag non-spam as spam. To restore email's reliability, a recent spam control approach grants quotas of stamps to senders and has the receiver communicate with a well-known quota enforcer to verify that the stamp on the email is fresh and to cancel the stamp to prevent reuse. The literature has several proposals based on this general idea but no complete system design and implementation that: scales to today's email load (which requires the enforcer to be distributed over many hosts and to tolerate faults in them), imposes minimal trust assumptions, resists attack, and upholds today's email privacy. DQE's enforcer occupies a point in the design spectrum notable for simplicity: mutually untrusting nodes implement a storage abstraction but avoid neighbor maintenance, replica maintenance, and heavyweight cryptography. DQE is based on a managed distributed hash table (DHT) interface, showing that it can be used in conjunction with electronic stamps (for quota allocation) to ensure that any non-negligible reuse of stamps will be detected.
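
A small sketch of the enforcer's check-and-cancel step, with an ordinary dictionary standing in for the distributed hash table; the stamp format is invented for illustration:

    import hashlib

    cancelled = {}          # stand-in for the DHT: key -> evidence of prior use

    def check_and_cancel(stamp: bytes) -> bool:
        """Return True if the stamp was fresh; cancel it so any reuse is detected."""
        key = hashlib.sha256(stamp).hexdigest()
        if key in cancelled:
            return False                     # stamp already spent: likely spam reuse
        cancelled[key] = True
        return True

    print(check_and_cancel(b"sender42:stamp-0001"))   # True  (fresh, now cancelled)
    print(check_and_cancel(b"sender42:stamp-0001"))   # False (reuse detected)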

Extensible Firmware Interface

A BIOS alternative. There has been rapid evolution of the personal computer platform since the 1980s, but there is one element of the PC that has not changed over those years, namely the BIOS (basic input/output system). Extensible Firmware Interface (EFI) is the name for a system developed by Intel that is designed to replace the aging BIOS used by personal computers. It is responsible for the power-on self-test (POST) process, bootstrapping the operating system, and providing an interface between the operating system and the physical hardware. The Intel Platform Innovation Framework for the Extensible Firmware Interface (referred to as "the Framework") is Intel's recommended implementation of the EFI Specification for platforms based on all members of the Intel® Architecture (IA) family. It offers an opportunity to provide an alternative to BIOS that will allow for faster booting, manageability, and additional features.

INTERPLANETARY INTERNET

Ten years ago few people had heard of the Internet. Even 5 years ago it was viewed by many as a technological curiosity; some thought it to be a passing fad. Ten years from now the Internet could be a phenomenon that has expanded beyond Earth to form an interplanetary network of Internets reaching to Mars and beyond. That is the vision of Vint Cerf and his colleagues on the Interplanetary Internet (IPN) team. Cerf co-invented TCP/IP in 1973 and is often called a "father of the Internet." He got the idea for an interplanetary extension of the Internet in 1997 and is now working with engineers at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, to make it real. IPN is a communication system to provide Internet-like services across interplanetary distances in support of deep space exploration. It will be required for communication among planets, satellites, asteroids, robotic spacecraft and crewed vehicles.

There are three main objectives of this project. The first is to lower the cost of space exploration in general by adopting communications techniques that are closely related to those used on Earth and are highly standardized; standard systems are inherently lower cost and, properly implemented, they can bring maturity and reliability to the deep space enterprise. Secondly, we want to use the harsh environment of space exploration to research new communications techniques, some of which may spin off into new Earth capabilities. Thirdly, we want to make it easier for the general public, via the World Wide Web, to participate in the excitement of exploring space "in person". To say the least, the Interplanetary Internet is a formidable project, but one with countless possibilities for the future.

Characteristics: communication in outer space is done by the modulation of radiated energy, and sometimes a planet will be between the source and the destination, so we cannot rely on end-to-end connectivity at any time, for the universe does not work that way. The communication medium is not copper cable and optical fiber as in wired internets; the medium associated with IPN is free-space RF. In wired internets the routing infrastructure is fixed, but in IPN the infrastructure is deployable and mobile. We cannot rely on ample bandwidth, because power is scarce out there and the bit error rates are high. Launching mass into interplanetary trajectories, injecting mass into orbit, and landing mass in the gravity well of another planet is currently very expensive.

PLASMA PANEL DISPLAY

For the past 75 years, the vast majority of displays have been built around the same technology: the cathode ray tube (CRT). Recently, a new alternative has popped up on store shelves: the plasma flat panel display. These displays have wide screens, comparable to the largest CRT sets, but they are only about 6 inches (15 cm) thick. Based on the information in a video signal, the display lights up thousands of tiny dots (called pixels) with a high-energy beam of electrons. In most systems, there are three pixel colors -- red, green and blue -- which are evenly distributed on the screen. By combining these colors in different proportions, the display can produce the entire color spectrum. The basic idea of a plasma display is to illuminate tiny colored fluorescent lights to form an image. Each pixel is made up of three fluorescent lights -- a red light, a green light and a blue light. Just like a CRT television, the plasma display varies the intensities of the different lights to produce a full range of colors. The central element in a fluorescent light is a plasma, a gas made up of free-flowing ions (electrically charged atoms) and electrons (negatively charged particles). Xenon and neon atoms, the atoms used in plasma screens, release light photons when they are excited. These photons are used to illuminate the pixels accordingly.

OVONIC UNIFIED MEMORY

Ovonic unified memory (OUM) is an advanced memory technology that uses a chalcogenide alloy (GeSbTe). The alloy has two states: a high-resistance amorphous state and a low-resistance polycrystalline state. These states are used to represent the reset and set states respectively. The performance and attributes of the memory make it an attractive alternative to flash memory and potentially competitive with existing nonvolatile memory technologies. OUM offers significantly faster write and erase speeds and higher cycling endurance than conventional flash memory. OUM also has the advantage of a simple fabrication process that permits the design of semiconductor chips with embedded nonvolatile memory using only a few additional mask steps. In this review, the physics and operation of phase change memory are first presented, followed by a discussion of the current status of development. Finally, the scaling capability of the technology is presented. The scaling projection shows that there is no physical limit to scaling down to the 22 nm node, with a number of technical challenges being identified.




SYMBIAN OS

Symbian OS is designed for the mobile phone environment. It addresses constraints of mobile phones by providing a framework to handle low memory situations, a power management model, and a rich software layer implementing industry standards for communications, telephony and data rendering. Even with these abundant features, Symbian OS puts no constraints on the integration of other peripheral hardware. This flexibility allows handset manufacturers to pursue innovative and original designs. Symbian OS is proven on several platforms. It started life as the operating system for the Psion series of consumer PDA products (including Series 5mx, Revo and net Book), and various adaptations by Diamond, Oregon Scientific and Ericsson. The first dedicated mobile phone incorporating Symbian OS was the Ericsson R380 Smart phone, which incorporated a flip-open keypad to reveal a touch screen display and several connected applications. Most recently available is the Nokia 9210 Communicator, a mobile phone that has a QWERTY keyboard and color display, and is fully open to third-party applications written in Java or C++. The five key points - small mobile devices, mass-market, intermittent wireless connectivity, diversity of products and an open platform for independent software developers - are the premises on which Symbian OS was designed and developed. This makes it distinct from any desktop, workstation or server operating system. This also makes Symbian OS different from embedded operating systems, or any of its competitors, which weren’t designed with all these key points in mind. Symbian is committed to open standards. Symbian OS has a POSIX-compliant interface and a Sun-approved JVM, and the company is actively working with emerging standards, such as J2ME, Bluetooth, MMS, SyncML, IPv6 and WCDMA. As well as its own developer support organization, books, papers and courses, Symbian delivers a global network of third-party competency and training centers - the Symbian Competence Centers and Symbian Training Centers. These are specifically directed at enabling other organizations and developers to take part in this new economy. Symbian has announced and implemented a strategy that will see Symbian OS running on many advanced open mobile phones.