Friday, July 31, 2009

Boids

Boids, developed by Craig Reynolds in 1986, is an artificial life program that simulates the flocking behaviour of birds. His paper on this topic was published in 1987 in the proceedings of the ACM SIGGRAPH conference. The name refers to a "bird-like object", but its pronunciation evokes that of "bird" in a stereotypical New York accent.

As with most artificial life simulations, Boids is an example of emergent behavior; that is, the complexity of Boids arises from the interaction of individual agents (the boids, in this case) adhering to a set of simple rules. The rules applied in the simplest Boids world are as follows:
separation: steer to avoid crowding local flockmates
alignment: steer towards the average heading of local flockmates
cohesion: steer to move toward the average position of local flockmates

More complex rules can be added, such as obstacle avoidance and goal seeking.
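
A minimal sketch of one boids update step, assuming a flat 2-D world; the neighbourhood radius, separation distance and rule weights below are illustrative, not values from Reynolds' paper:

import numpy as np

# Illustrative tuning parameters (real implementations tune these empirically).
RADIUS, SEP_DIST = 50.0, 8.0
W_SEP, W_ALI, W_COH = 0.05, 0.05, 0.01

def step(pos, vel):
    """One update of (N, 2) position and velocity arrays."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        mates = (d > 0) & (d < RADIUS)          # local flockmates, excluding self
        if not mates.any():
            continue
        sep = (pos[i] - pos[mates & (d < SEP_DIST)]).sum(axis=0)  # separation
        ali = vel[mates].mean(axis=0) - vel[i]                    # alignment
        coh = pos[mates].mean(axis=0) - pos[i]                    # cohesion
        new_vel[i] += W_SEP * sep + W_ALI * ali + W_COH * coh
    return pos + new_vel, new_vel

rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 200, (30, 2)), rng.uniform(-1, 1, (30, 2))
for _ in range(100):
    pos, vel = step(pos, vel)   # flocking emerges from the three rules alone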

The movement of Boids can be characterized as either chaotic (splitting groups and wild behaviour) or orderly. Unexpected behaviours, such as splitting flocks and reuniting after avoiding obstacles, can be considered emergent.

The boids framework is often used in computer graphics, providing realistic-looking representations of flocks of birds and other creatures, such as schools of fish or herds of animals.

Boids work in a manner similar to cellular automata, since each boid "acts" autonomously and references a neighbourhood, as do cellular automata.

Real-time Transport Protocol

The Real-time Transport Protocol (RTP) defines a standardized packet format for delivering audio and video over the Internet. It was developed by the Audio-Video Transport Working Group of the IETF and first published in 1996 as RFC 1889, which was superseded by RFC 3550 in 2003.

RTP is used extensively in communication and entertainment systems that involve streaming media, such as telephony, video teleconference applications and web-based push to talk features. For these it carries media streams controlled by H.323, MGCP, Megaco, SCCP, or Session Initiation Protocol (SIP) signaling protocols, making it one of the technical foundations of the Voice over IP industry.

RTP is usually used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the media streams (e.g., audio and video) or out-of-band signaling (DTMF), RTCP is used to monitor transmission statistics and quality of service (QoS) information. When both protocols are used in conjunction, RTP is usually originated and received on even port numbers, whereas RTCP uses the next higher odd port number.
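
As a sketch of what the standardized packet format looks like, here is the 12-byte fixed header defined in RFC 3550, packed in Python (field values are illustrative):

import struct

def rtp_header(seq, timestamp, ssrc, payload_type=0, marker=0):
    """Pack the 12-byte fixed RTP header (version 2, no padding,
    no extension, no CSRC list)."""
    byte0 = 2 << 6                          # V=2, P=0, X=0, CC=0
    byte1 = (marker << 7) | payload_type    # M bit plus 7-bit payload type
    return struct.pack('!BBHII', byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc)

# e.g. a PCMU (payload type 0) audio packet; the RTP stream would go on an
# even UDP port and the matching RTCP stream on the next odd port.
header = rtp_header(seq=1, timestamp=160, ssrc=0x12345678)
assert len(header) == 12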

Electronic money

Electronic money (also known as e-money, electronic cash, electronic currency, digital money, digital cash or digital currency) refers to money or scrip which is exchanged only electronically. Typically, this involves the use of computer networks, the internet and digital stored-value systems. Electronic Funds Transfer (EFT) and direct deposit are examples of electronic money. It is also a collective term for financial cryptography and the technologies enabling it.

While electronic money has been an interesting problem for cryptography (see for example the work of David Chaum and Markus Jakobsson), to date, use of digital cash has been relatively low-scale. One rare success has been Hong Kong's Octopus card system, which started as a transit payment system and has grown into a widely used electronic cash system. Singapore also has an electronic money implementation for its public transportation system (commuter trains, buses, etc.), which is very similar to Hong Kong's Octopus card and based on the same type of card (FeliCa). There is also one implementation in the Netherlands, known as Chipknip.

Mathematical markup languages

A mathematical markup language is a computer notation for representing mathematical formulae, based on mathematical notation. Specialized markup languages are necessary because computers normally deal with linear text and more limited character sets (although increasing support for Unicode is making very simple uses obsolete). A formally standardized syntax also allows a computer to interpret otherwise ambiguous content, for rendering or even evaluation. The most popular computer-interpretable syntaxes are TeX/LaTeX and MathML.
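
For example, the quadratic formula written in TeX/LaTeX source; MathML encodes the same structure with explicit XML elements such as <mfrac> and <msqrt>:

% TeX/LaTeX source for the quadratic formula
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}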




DeCSS: Decrypting the Content Scramble System

DeCSS is a computer program capable of decrypting content on a DVD-Video disc encrypted using the Content Scramble System (CSS).

Origins and history

DeCSS was devised by three people, two of whom remain anonymous. It was released on the Internet mailing list LiViD in October 1999. The one known author of the trio is Norwegian programmer Jon Lech Johansen, whose home was raided in 2000 by Norwegian police. Still a teenager at the time, he was put on trial in a Norwegian court for violating Norwegian Criminal Code section 145[1], and faced a possible jail sentence of two years and large fines, but was acquitted of all charges in early 2003. However, on March 5, 2003, a Norwegian appeals court ruled that Johansen would have to be retried. The court said that arguments filed by the prosecutor and additional evidence merited another trial. On December 22, 2003, the appeals court agreed with the acquittal, and on January 5, 2004, Norway's Økokrim (Economic Crime Unit) decided not to pursue the case further.

The program was first released on October 6, 1999, when Johansen posted an announcement of DeCSS 1.1b, a closed-source Windows-only application for DVD ripping, on the livid-dev mailing list. The source code was leaked before the end of the month. The first release of DeCSS was preceded, by a few weeks, by a program called DoD DVD Speed Ripper from a group called Drink or Die, which didn't include source code and which apparently did not work with all DVDs. Drink or Die reportedly disassembled the object code of the Xing DVD player to obtain a player key. The group that wrote DeCSS, including Johansen, came to call themselves Masters of Reverse Engineering and may have obtained information from Drink or Die.

The CSS decryption source code used in DeCSS was mailed to Derek Fawcus before DeCSS was released. When the DeCSS source code was leaked, Fawcus noticed that DeCSS included his css-auth code in violation of the GNU GPL. When Johansen was made aware of this, he contacted Fawcus to solve the issue and was granted a license to use the code in DeCSS under non-GPL terms.

On January 22, 2004, the DVD CCA dropped its case against Jon Johansen.

Simultaneous multithreading

Simultaneous multithreading, often abbreviated as SMT, is a technique for improving the overall efficiency of superscalar CPUs with hardware multithreading. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures.

Multithreading is similar in concept to preemptive multitasking but is implemented at the thread level of execution in modern superscalar processors.

Simultaneous multithreading (SMT) is one of the two main implementations of multithreading, the other form being temporal multithreading. In temporal multithreading, only one thread of instructions can execute in any given pipeline stage at a time. In simultaneous multithreading, instructions from more than one thread can be executing in any given pipeline stage at a time. This is done without great changes to the basic processor architecture: the main additions needed are the ability to fetch instructions from multiple threads in a cycle, and a larger register file to hold data from multiple threads. The number of concurrent threads can be decided by the chip designers, but practical restrictions on chip complexity have limited the number to two for most SMT implementations.

Because the technique is really an efficiency solution, and inevitably increases contention for shared resources, measuring or agreeing on its effectiveness can be difficult. Some researchers have shown that the extra threads can be used to proactively seed a shared resource like a cache, to improve the performance of another single thread, and claim this shows that SMT is not just an efficiency solution. Others use SMT to provide redundant computation, for some level of error detection and recovery.

However, in most current cases, SMT is about hiding memory latency and increasing efficiency and throughput of computation per amount of hardware used.

Quantum dot cellular automaton

Quantum dot cellular automata (sometimes referred to simply as quantum cellular automata, or QCA) rest on a simple observation: any device designed to represent data and perform computation, regardless of the physics principles it exploits and the materials used to build it, must have two fundamental properties, distinguishability and conditional change of state, the latter implying the former. This means that such a device must have barriers that make it possible to distinguish between states, and that it must have the ability to control these barriers to perform a conditional change of state. For example, in a digital electronic system, transistors play the role of such controllable energy barriers, making it extremely practical to perform computing with them.

Cellular automata

A cellular automaton (CA) is an abstract system consisting of a uniform (finite or infinite) grid of cells. Each of these cells can be in only one of a finite number of states at a discrete time. The state of each cell in this grid is determined by the states of its adjacent cells, also called the cell's "neighborhood." The most popular example of a cellular automaton is the "Game of Life," presented by John Horton Conway in 1970.
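
A minimal sketch of such a grid update, using Conway's Life rules on a wrap-around grid (NumPy; the glider pattern is just one illustrative starting state):

import numpy as np

def life_step(grid):
    """One generation of Conway's Game of Life on a wrap-around 0/1 grid."""
    # Count each cell's eight neighbours by summing shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    # Live cells survive with 2 or 3 neighbours; dead cells with exactly 3 are born.
    return (((grid == 1) & ((n == 2) | (n == 3))) |
            ((grid == 0) & (n == 3))).astype(int)

# A "glider" drifting across a 10x10 toroidal grid:
g = np.zeros((10, 10), dtype=int)
g[1, 2] = g[2, 3] = g[3, 1] = g[3, 2] = g[3, 3] = 1
for _ in range(4):
    g = life_step(g)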

MPEG-7

MPEG-7 is a multimedia content description standard. The description is associated with the content itself, to allow fast and efficient searching for material that is of interest to the user. MPEG-7 is formally called Multimedia Content Description Interface. Unlike MPEG-1, MPEG-2 and MPEG-4, it is not a standard that deals with the actual encoding of moving pictures and audio. It uses XML to store metadata, and can be attached to timecode in order to tag particular events, or synchronise lyrics to a song, for example.

It was designed to standardize:
a set of Description Schemes (short DS in the standard) and Descriptors (short D in the standard)
a language to specify these schemes, called the Description Definition Language (short DDL in the standard)
a scheme for coding the description

The combination of MPEG-4 and MPEG-7 has been referred to as MPEG-47.

MPEG-7 objectives
Provide fast and efficient searching, filtering and content identification methods.
Describe the main aspects of the content (low-level characteristics, structure, models, collections, etc.).
Support indexing across a broad range of applications.
Cover the audiovisual information that MPEG-7 deals with: audio, voice, video, images, graphs and 3D models.
Describe how objects are combined in a scene.
Maintain independence between the description and the information itself.


MPEG-7 applications

There are many applications and application domains which will benefit from the MPEG-7 standard. A few application examples are:
Digital library: Image/video catalogue, musical dictionary.
Multimedia directory services: e.g. yellow pages.
Broadcast media selection: Radio channel, TV channel.
Multimedia editing: Personalized electronic news service, media authoring.
Security services: Traffic control, production chains...
E-business: product search.
Cultural services: Art-galleries, museums...
Educational applications.
Biomedical applications.

Interactive Television

Interactive television (generally known as iTV) describes a number of techniques that allow viewers to interact with television content as they view it.
Interactive television represents a continuum from low interactivity (TV on/off, volume, changing channels) to moderate interactivity (simple movies on demand without player controls) and high interactivity in which, for example, an audience member affects the program being watched. The most obvious example of this would be any kind of real-time voting on the screen, in which audience votes create decisions that are reflected in how the show continues.

A return path to the program provider is not necessary to have an interactive program experience. Once a movie is downloaded, for example, the controls may all be local; the link was needed only to download the program. Text and software that can be executed locally at the set-top box or IRD (Integrated Receiver Decoder) may also run automatically once the viewer enters the channel.

Friday, July 17, 2009

Signcryption


Abstract:

Signcryption is a new cryptographic primitive which simultaneously provides both confidentiality and authenticity. Previously, these two goals had been considered separately, with encryption schemes providing confidentiality and digital signatures providing authenticity. In cases where both were required, the encryption and digital signature operations were simply composed sequentially. In 1998, Zheng demonstrated that by combining both goals into a single primitive it is possible to achieve significant savings in both computational and communication overhead. Since then, a wide variety of signcryption schemes have been proposed. In this seminar we discuss one signcryption algorithm, the advantages and disadvantages of signcryption, a comparison of signcryption with signature-then-encryption, and applications of signcryption.
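
For contrast, here is a minimal sketch of the sequential sign-then-encrypt baseline that signcryption improves on, using the Python "cryptography" package; key handling is simplified and the symmetric key is assumed to be already shared:

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # sender's signing key
sym_key = Fernet.generate_key()              # assumed already shared with receiver

def sign_then_encrypt(message: bytes) -> bytes:
    sig = signing_key.sign(message)                  # authenticity
    return Fernet(sym_key).encrypt(sig + message)    # confidentiality

def decrypt_then_verify(blob: bytes) -> bytes:
    data = Fernet(sym_key).decrypt(blob)
    sig, message = data[:64], data[64:]              # Ed25519 signatures are 64 bytes
    signing_key.public_key().verify(sig, message)    # raises InvalidSignature if forged
    return message

assert decrypt_then_verify(sign_then_encrypt(b"hello")) == b"hello"
# Signcryption replaces these two sequential passes with a single primitive.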

For the full report: reports4all@rediffmail.com

MAGNETOHYDRODYNAMIC POWER GENERATION TECHNOLOGY (MHD)

Magnetohydrodynamic (MHD) power generation technology produces electrical power using a high-temperature conducting plasma moving through an intense magnetic field. The conversion process in MHD was initially described by Michael Faraday in 1832, but actual utilisation of the concept long remained out of reach. The first known attempt to develop an MHD generator was made at the Westinghouse research laboratory (USA) around 1936. The efficiencies of modern thermal power generating systems lie between 35 and 40%, as they have to reject large quantities of heat to the environment. In an MHD generator, by contrast, the thermal energy of the gas is converted directly into electrical energy; hence it is known as a direct energy conversion system. MHD power plants are classified into open-cycle and closed-cycle types, based on the nature of processing of the working fluid.

With present research and development programmes, MHD power generation may play an important role in the power industry in future and help ease the present power crisis. The MHD process can be used not only for commercial power generation but also for many other applications. The economic attractiveness of MHD for bulk generation of power from fossil fuel has been indicated in many design studies and cost estimates of conceptual plants. MHD promises a dramatic improvement in the cost of generating electricity from coal, beneficial to the growth of the national economy, and its extensive use could save billions of dollars in fuel. Prospects of much better fuel utilization are most important, but the potential for lower capital costs, with increased utilization of invested capital, provides a very important economic incentive as well.

The beneficial environmental aspects of MHD are probably of equal or even greater significance. The MHD energy conversion process can contribute greatly to the solution of the serious air and thermal pollution problems faced by all steam-electric power plants, while simultaneously assuring better utilization of our natural resources. It can therefore be claimed that the development of MHD for electric utility power generation is an objective of national significance. The high-temperature MHD process makes it possible to take advantage of the highest flame temperatures which can be produced by combustion of fossil fuel. While commercial nuclear reactors able to provide heat for MHD have yet to be developed, the combined use of MHD with a nuclear heat source holds great promise for the future. In India, coal is by far the most abundant fossil fuel and thus the major energy source for fossil-fuelled MHD power generation. Before large central-station power plants with coal as the energy source can become commercially viable, further development is necessary.

Sunday, July 12, 2009

VIRTUAL SURGERY

Virtual surgery is a computer-based simulated surgery, which can teach surgeons new procedures and can determine their level of competence before they operate on patients. Virtual surgery is based on the concept of virtual reality. A simulated model of the human anatomy, which looks, feels and responds like a real human body, is created for the surgeon to operate on. The virtual reality simulators consist of force-feedback devices, a real-time haptic computer, haptic software, a dynamic simulator and 3D graphics. Using 3D visualization technologies and the haptic devices, a surgery can be performed which enables the surgeon to reach into the virtual patient with their hands to touch, feel, grasp and manipulate the simulated organs.

Microvia Technology

Microvias are small holes in the range of 50-100 µm. In most cases they are blind vias from the outer layers to the first inner layer.
The development of very complex Integrated Circuits (ICs) with extremely high input/output counts, coupled with steadily increasing clock rates, has forced electronics manufacturers to develop new packaging and assembly techniques. Components with pitches less than 0.30 mm, chip-scale packages, and flip-chip technology are underlining this trend and highlight the importance of new printed wiring board technologies able to cope with the requirements of modern electronics.
In addition, more and more electronic devices have to be portable, and consequently systems integration, volume and weight considerations are gaining importance.
These portables are usually battery powered, resulting in a trend towards lower-voltage power supplies, with implications for PCB (Printed Circuit Board) complexity.
As a result of the above considerations, the future PCB will be characterized by very high interconnection density with finer lines and spaces, smaller holes and decreasing thickness. To gain more landing pads for small-footprint components, the use of microvias becomes a must.

OFDMA

Orthogonal Frequency Division Multiple Access (OFDMA) is a multiple access scheme for OFDM systems. It works by assigning a subset of subcarriers to individual users.
OFDMA features
OFDMA is the 'multi-user' version of OFDM
Functions by partitioning the resources in the time-frequency space, by assigning units along the OFDM signal index and OFDM sub-carrier index
Each OFDMA user transmits symbols using sub-carriers that remain orthogonal to those of other users
More than one sub-carrier can be assigned to one user to support high rate applications
Allows simultaneous transmission from several users ⇒ better spectral efficiency
Multiuser interference is introduced if there is frequency synchronization error
The term 'OFDMA' is claimed to be a registered trademark by Runcom Technologies Ltd., with various other claimants to the underlying technologies through patents. It is used in the mobility mode of IEEE 802.16 WirelessMAN Air Interface standard, commonly referred to as WiMAX.
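
A toy sketch of the core idea, with two hypothetical users assigned disjoint halves of a 64-subcarrier OFDM symbol (the subcarrier split and QPSK mapping are illustrative):

import numpy as np

N_SC = 64                                              # subcarriers per OFDM symbol
users = {"alice": range(0, 32), "bob": range(32, 64)}  # disjoint subcarrier subsets

def qpsk(bits):
    """Map pairs of bits to unit-energy QPSK symbols."""
    b = np.asarray(bits)
    return ((1 - 2 * b[0::2]) + 1j * (1 - 2 * b[1::2])) / np.sqrt(2)

def ofdma_symbol(user_symbols):
    """Place each user's symbols on its own subcarriers (others stay zero),
    then one IFFT turns the combined spectrum into the time-domain signal."""
    freq = np.zeros(N_SC, dtype=complex)
    for user, syms in user_symbols.items():
        freq[list(users[user])] = syms                 # orthogonal by construction
    return np.fft.ifft(freq)

rng = np.random.default_rng(0)
tx = ofdma_symbol({u: qpsk(rng.integers(0, 2, 64)) for u in users})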

Direct sequence code division multiple access (DS-CDMA)

The ordinary CDMA technology identifies users by means of a user-specific signature sequence. Since many users share the transmission medium, there is always a chance of superimposition of signals, which can cause interference in the network.

The solution to this inconvenience comes in the form of interleaving, which separates the users. This special form of CDMA offers many features of CDMA, such as dynamic channel sharing, mitigation of cross-cell interference, asynchronous transmission, ease of cell planning and robustness against fading, along with the low-cost interference cancellation technique available for systems with a large number of users in multipath channels. Usable with second and third generation mobile phones, the cost per user of this algorithm is independent of the number of users. It gives good performance along with simplicity of use, and maintains its low complexity and high performance even in multipath situations.

Surface conduction Electron emitter Display (SED)

SED technology has been in development since 1987. A flat-panel display technology that employs surface-conduction electron emitters for every individual display pixel is referred to as a Surface-conduction Electron-emitter Display (SED). Though the technologies differ, the basic principle that emitted electrons excite a phosphor coating on the display panel is shared by both SED display technology and traditional cathode ray tube (CRT) televisions.

When driven by moderate voltages (tens of volts), electrons tunnel across a thin slit in the surface-conduction electron emitter apparatus. Some of these electrons are scattered at the receiving pole and are accelerated towards the display surface by a large voltage gradient (tens of kV) between the display panel and the emitter apparatus. These emitted electrons then excite the phosphor coating on the display panel, and the image follows.

The main advantage of SEDs compared with LCDs and CRTs is that they provide the best mix of both technologies. The SED combines the slim form factor of LCDs with the superior contrast ratios, exceptional response time and better picture quality of CRTs. SEDs also offer higher brightness, better color performance and wider viewing angles, and consume much less power. Moreover, SEDs do not require a deflection system for the electron beam, which has in turn helped manufacturers to create a display design that is only a few inches thick but still light enough to be hung on a wall. All of the above properties have consequently helped manufacturers to enlarge the display panel simply by increasing the number of electron emitters relative to the necessary number of pixels. Canon and Toshiba are the two major companies working on SEDs. The technology is still developing, and we can expect further breakthroughs from the research.



Wibree

Wibree is an innovative digital radio technology that could soon become a benchmark for open wireless communication. Working almost equivalently to Bluetooth technology, it operates in the 2.4 GHz ISM band with a physical-layer bit rate of 1 Mbps.

Intended for appliances like wrist watches, wireless keyboards, toys and sports sensors, its key feature is very low power consumption within a prescribed range of 10 meters (30 feet), using low-cost transceiver microchips with an output power of -6 dBm.

Announced by Nokia on October 3, 2006, it is today licensed and further researched by some major corporations, including Nordic Semiconductor, Broadcom Corporation, CSR, Epson, Suunto and Taiyo Yuden. According to Bob Iannucci, the head of Nokia's research centre, this groundbreaking technology, claimed to be ten times more capable than Bluetooth, may soon replace it. The corporate giant Nordic Semiconductor is already working on the technology so as to bring out model chips by mid-2007.

dynode

A dynode is one of a series of electrodes within a photomultiplier tube. Each dynode is more positively charged than its predecessor, and secondary emission occurs at the surface of each one. Such an arrangement is able to amplify the tiny current emitted by the photocathode, typically by a factor of about one million: for example, if each of ten dynodes multiplies the arriving electrons roughly fourfold, the overall gain is about 4^10 ≈ 10^6.

Hydrophone

A hydrophone is a sound-to-electricity transducer for use in water or other liquids, analogous to a microphone for air. Note that a hydrophone can sometimes also serve as a projector (emitter), but not all hydrophones have this capability, and they may be destroyed if used in such a manner. The first device to be called a 'hydrophone' was developed when the technology matured, and used ultrasonic waves, which provided higher overall acoustic output as well as improved detection. The ultrasonic waves were produced by a mosaic of thin quartz crystals glued between two steel plates, with a resonant frequency of about 150 kHz. Contemporary hydrophones more often use barium titanate, a piezoelectric ceramic material, giving higher sensitivity than quartz. Hydrophones are an important part of the SONAR systems used to detect submarines by both surface vessels and other submarines. Large numbers of hydrophones were used in the building of fixed-location detection networks such as SOSUS.

Thursday, July 9, 2009

Microbe-Powered 'Fart' Machine Stores Energy

It sounds like a gag gift instead of serious science, but a new electrical farting machine could improve fuel cell technology by turning CO2 in the atmosphere into methane.

The technique won't combat global warming directly, since both CO2 and methane are potent greenhouse gases, but it could help store alternative energies such as wind and solar more efficiently.

It works like this: giving small jolts of electricity to single-celled microorganisms known as archaea prompts them to remove CO2 from the air and turn it into methane, released as tiny "farts." The methane, in turn, can be used to power fuel cells or to store the electrical energy chemically until it's needed.

"We found that we can directly convert electrical current into methane using a very specific microorganism," said Bruce Logan, a professor at Pennsylvania State University, who details his discovery in the journal Environmental Science and Technology.

Blue Laser Could Lead to Autism Cure

Lasers could one day cure, or at least aid in the search for drugs that treat, diseases ranging from autism to schizophrenia, according to two new studies from the Massachusetts Institute of Technology and Stanford University published online in the journal Nature.

A blue laser shined into a live mouse brain triggered gamma waves, which are a kind of brain wave necessary for concentration and cognition that people with autism and schizophrenia often lack.

"There are lots of theories about why [gamma wave oscillation] is impaired," said Li-Huei Tsai, a professor at MIT and a co-author on one of the Nature papers.

"This is the first proof that a specific set of neurons are responsible for gamma waves."

Efficient New Light Unfolds Like Paper

The next time your lamp needs a new light bulb, you might change the lamp shade instead of the light bulb.

New research out of Germany, published in a recent issue of the journal Nature, shows that cheap and thin organic light-emitting diodes (OLEDs) can create white light as bright as any compact fluorescent bulb while using nearly half the electricity.

"This uses cheap, well-known, and well-established materials," said Sebastian Reineke, a coauthor on the paper from the Institut fur Angewandte Photophysik.

"First, we optimized the light that the white OLED emits, and then did some optical tricks to ensure that more of the light was emitted," instead of getting stuck inside the materials themselves.

Human Ear Inspires Universal Radio

TV, radio, GPS, cell phones, wireless Internet, and other electronics all use different radio waves to receive and send information. Now scientists at MIT have created a tiny chip capable of receiving any radio signal, based on the human ear.

The new universal radio could lead to better reception and a new class of electronics that can pick up any radio frequency.

"The human ear is a very good spectrum analyzer," said Rahul Sarpeshkar, a professor at MIT who co-authored the paper in the June issue of the IEEE Journal of Solid-State Circuits. "We copied some of the tricks the ear does, and mapped those onto electronics."

The unique architecture of the human ear allows it to detect a wide range of sounds. A spiral with membranes, fluids, and cilia with different mechanical properties helps the ear to separate out each frequency, from 100 hertz up to 10,000 hertz, and transmit that information to the brain.

To detect electromagnetic waves instead of pressure waves, the MIT scientists used circuits in place of cilia. Starting on the outside edge of the 1.5 mm by 3 mm chip are tiny squares, each one responsible for processing a different radio-frequency signal.

Single electron tunneling (SET) transistor

The chief problem faced by chip designers is the size of the chip. According to Moore's Law, the number of transistors on a chip will approximately double every 18 to 24 months. Moore's Law works largely through shrinking transistors, the circuits that carry electrical signals. By shrinking transistors, designers can squeeze more transistors into a chip. However, more transistors means more electricity and heat compressed into an even smaller space. Furthermore, smaller chips increase performance but also compound the problem of complexity.

To solve this problem, the single-electron tunneling transistor (SET), a device that exploits the quantum effect of tunneling to control and measure the movement of single electrons, was devised. Experiments have shown that charge does not flow continuously in these devices but in a quantized way. This paper discusses the principle of operation of the SET, its fabrication and its applications. It also deals with the merits and demerits of the SET compared to the MOSFET. Although it is unlikely that SETs will replace FETs in conventional electronics, they should prove useful in ultra-low-noise analog applications. Moreover, because it is not affected by the same technological limitations as the FET, the SET can closely approach the quantum limit of sensitivity. It might also be a useful read-out device for a solid-state quantum computer. In future, when quantum technology replaces current computer technology, the SET will find immense applications.

Single-electron tunneling transistors are three-terminal switching devices that can transfer electrons from source to drain one by one. The structure of SETs is similar to that of FETs. The important difference, however, is that in an SET the channel is separated from source and drain by tunneling junctions, and the role of the channel is played by an "island". The particular advantage of SETs is that they require only one electron to toggle between ON and OFF states. Such a transistor generates much less heat and requires less power to move electrons around, a feature very important in battery-powered mobile devices such as cell phones. We know that Pentium chips become much too hot and require massive fans to cool them. This wouldn't happen with single-electron transistors, which use much less energy and so can be arranged much closer together.

VoIP in Mobile Phones

Today is the world of mobility, and the only device that is truly mobile is the mobile phone. Calling from mobile phones is expensive; the cheapest calling method is PC-to-PC calling, which costs almost nothing because it uses VoIP. In this seminar we look into implementing VoIP in mobile phones. Different networks like GPRS/EDGE, Bluetooth and WiFi are common in mobile phones; we look into each, and the advantages and disadvantages of each.

DLP Projector

A DLP projector is an optical system driven by digital electronics. It is the only display solution that enables movie video projectors, televisions, home theater systems and business video projectors to create an entirely digital connection between a graphic or video source and the screen in front of you. At the heart of every DLP projection system is an optical semiconductor that manipulates light digitally, known as the Digital Micromirror Device, or DLP chip, which is a rectangular array of up to 2 million hinge-mounted microscopic mirrors (each measuring less than one-fifth the width of a human hair). When a DLP chip is coordinated with a digital video or graphic signal, a light source, and a projection lens, its mirrors can reflect an all-digital image onto a screen or other surface. It has three key advantages over existing projection technologies. The digital nature of DLP enables digital grayscale and color reproduction, and positions DLP to be the final link in the digital video infrastructure. Because it is based on the reflective DMD, DLP is more efficient than competing transmissive LCD technologies. Finally, DLP has the ability to create seamless, film-like images. DLP makes images look better. You've heard about the digital revolution; now see it with Digital Light Processing.

Need more information? Mail me or download these:
http://www.infocomm.org/cps/rde/xbcr/infocomm/ProjectorTechnologyExplained.pdf
http://www.xilinx.com/esp/broadcast/collateral/projectors.pdf

Applications of Majority Gates with Quantum-dot Cellular Automata

Majority gate-based logic is not normally explored with standard CMOS technologies, primarily because of the hardware inefficiencies in creating majority gates. As a result, not much effort has been made towards the optimization of circuits based on majority gates. We are exploring one particular emerging technology, quantum-dot cellular automata (QCA), in which the majority gate is the fundamental logic primitive. One of its main applications is a simple and intuitive method for reduction of three-variable Boolean functions into a simplified majority representation. The method is based on Karnaugh maps (K-maps), used for the simplification of Boolean functions.

Majority gate logic is expected to find use in quantum-dot cellular automata (QCA), an emerging computational nanotechnology based on a QCA cell composed of four quantum dots arranged in a square pattern. With QCA, the 3-input majority gate forms the fundamental logic primitive. Majority logic is a way of implementing digital operations based on the principle of majority decision. The logic element, a majority gate, has an odd number of binary inputs and a binary output. The output is a logical 1 when the majority of inputs is logic 1, and a logical 0 when the majority of inputs is logic 0. Any digital function can be implemented by a combination of majority gates and binary inverters. Majority logic provides a concise implementation of most digital functions encountered in logic-design applications.
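
A minimal sketch of the majority primitive and of how AND/OR fall out of it by fixing one input; the XOR construction at the end is one illustrative reduction, not the K-map method described above:

def MAJ(a, b, c):
    """3-input majority: 1 when at least two inputs are 1."""
    return (a & b) | (b & c) | (a & c)

AND = lambda a, b: MAJ(a, b, 0)   # majority gate with one input fixed at 0
OR  = lambda a, b: MAJ(a, b, 1)   # majority gate with one input fixed at 1
NOT = lambda a: 1 - a             # QCA provides the inverter directly

# Any digital function then follows from MAJ and NOT, e.g. XOR:
XOR = lambda a, b: OR(AND(a, NOT(b)), AND(NOT(a), b))
assert [XOR(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [0, 1, 1, 0]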

Virtual Keyboard

A virtual keyboard is a key-in device, roughly the size of a fountain pen, which uses highly advanced laser technology to project a full-sized keyboard onto a flat surface. Since the invention of computers, they have undergone rapid miniaturization. Disks and other components grew smaller, but one component remained the same for decades: the keyboard. Since miniaturization of a traditional keyboard is very difficult, we go for the virtual keyboard, where a camera tracks the finger movements of the typist to determine the correct keystroke. A virtual keyboard is thus a keyboard that a user operates by typing on or within a wireless or optically detectable surface or area, rather than by depressing physical keys.



VOICE MORPHING


Voice morphing means the transformation of one speech signal into another. Voice morphing, also referred to as voice transformation and voice conversion, is a technique for modifying a source speaker's speech to sound as if it was spoken by a designated target speaker. The core process in a voice morphing system is the transformation of the spectral envelope of the source speaker to match that of the target speaker, and linear transformations estimated from time-aligned parallel training data are commonly used to achieve this. Speech morphing is analogous to image morphing. In image morphing, the in-between images all show one face smoothly changing its shape and texture until it turns into the target face. It is this feature that a speech morph should possess: one speech signal should smoothly change into another, keeping the shared characteristics of the starting and ending signals while smoothly changing the other properties. The major properties of concern in a speech signal are its pitch and envelope information, which reside in convolved form in the signal, so an efficient method for extracting each of them is necessary. We have adopted an uncomplicated approach, namely cepstral analysis: pitch and formant information in each signal is extracted using the cepstral approach.
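
A minimal sketch of that cepstral separation step, assuming single frames and an illustrative lifter cutoff; a real morphing system would do considerably more:

import numpy as np

def real_cepstrum(frame):
    """Real cepstrum of one windowed frame: IFFT of the log magnitude spectrum."""
    spectrum = np.fft.fft(frame * np.hamming(len(frame)))
    return np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real

def pitch_and_envelope(frame, fs, lifter=32):
    """Low 'quefrencies' carry the spectral envelope (formants); a peak at
    higher quefrencies marks the pitch period."""
    c = real_cepstrum(frame)
    envelope = c[:lifter]                          # smooth envelope coefficients
    peak = lifter + int(np.argmax(c[lifter:len(c) // 2]))
    return fs / peak, envelope                     # pitch estimate in Hz

# Toy check: a 200 Hz tone rich in harmonics, sampled at 8 kHz.
fs = 8000
t = np.arange(512) / fs
frame = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in range(1, 11))
pitch, env = pitch_and_envelope(frame, fs)         # pitch should come out near 200 Hz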




Surface-conduction Electron-emitter Display (SED)

Abstract

A Surface-conduction Electron-emitter Display (SED) is a flat-panel display technology that uses surface-conduction electron emitters for every individual display pixel. The surface-conduction electron emitter emits electrons that excite a phosphor coating on the display panel, the same basic concept found in traditional cathode ray tube (CRT) televisions. This means that SEDs can combine the slim form factor of LCDs with the high contrast ratios, refresh rates and overall better picture quality of CRTs, and research so far suggests that SEDs consume less power than LCD displays. The surface-conduction electron emitter apparatus consists of a thin slit across which electrons tunnel when excited by moderate voltages (tens of volts). When the electrons cross the electric poles across the thin slit, some are scattered at the receiving pole and are accelerated toward the display surface by a large voltage gradient (tens of kV) between the display panel and the surface-conduction electron emitter apparatus. SED displays offer brightness, color performance, and viewing angles on par with CRTs. However, they do not require a deflection system for the electron beam. As a result, engineers can create a display that is just a few inches thick, while still light enough for wall-hanging designs. The manufacturer can enlarge the panel merely by increasing the number of electron emitters relative to the necessary number of pixels. SED technology has been in development since 1987, and Canon and Toshiba are two major companies working on SEDs.

ADVANCED IC PACKAGING TECHNOLOGIES

ABSTRACT

The many different functions of semiconductor devices are made possible by integrated circuits, which are built into the surface of a silicon chip (bare chip) using a complex process. If these chips could be used in unmodified form, packaging would be unnecessary, and the cost of chips reduced. However, because silicon chips are very delicate, even a tiny speck of dust or drop of water can hinder their function; light can also cause malfunctions. To combat these problems, silicon chips are protected by packaging. There are many different technologies used for packaging components on printed circuit boards. The conventional through-hole technology is now being replaced by a new technology known as Surface Mount Technology (SMT). Surface Mount Technology is a method for constructing electronic circuits in which the components are mounted directly onto the surface of printed circuit boards (PCBs); electronic devices so made are called Surface Mount Devices (SMDs). Various other technologies, such as Ball Grid Array (BGA), flip-chip technology, chip-scale packaging and multichip modules, are also emerging; they are expected to take over from the existing packaging technologies and can clearly be described as the future of IC packaging.

Thought Translation Device



Abstract



The Thought Translation Device (TTD) is a Brain-Computer Interface (BCI) which has successfully enabled totally paralyzed patients to communicate by using their brain potentials only. The TTD consists of a training device and a spelling program for the completely paralyzed, based on slow cortical brain potentials (SCPs). During the training phase, self-regulation of SCPs is learned through visual-auditory feedback and positive reinforcement of SCPs. During the spelling phase, patients select letters or words with their SCPs. A psychophysiological system for detection of cognitive functioning in completely paralyzed patients is an integral part of the TTD. In its present form, the core of the TTD consists of a single computer program that runs under all MS-Windows versions. This software contains the functions of EEG acquisition, storage, signal processing and classification, and various applications such as spelling.

IDMA - Future of Wireless Technology


Direct-sequence code-division multiple access (DS-CDMA) has been adopted in second- and third-generation cellular mobile standards. Users are separated in a CDMA system by the use of a different signature for each user. In a CDMA system, many users share the transmission medium, so signals from different users are superimposed, causing interference. This report outlines a multiple access scheme in which interleaving is the only means of user separation. It is a special form of CDMA, and it inherits many advantages of CDMA such as dynamic channel sharing, mitigation of cross-cell interference, asynchronous transmission, ease of cell planning and robustness against fading. A low-cost interference cancellation technique is also available for systems with a large number of users in multipath channels; the normalized (per-user) cost of this algorithm is independent of the number of users. Furthermore, such low-complexity and high-performance attributes can be maintained in a multipath environment. The detection algorithm for IDMA requires less complexity than that of CDMA, and the performance is surprisingly good despite its simplicity.
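
A toy sketch of interleave-based user separation: both users spread with the same repetition code, and only a user-specific pseudorandom interleaver tells them apart. Parameters are illustrative, and a real IDMA receiver uses iterative chip-by-chip detection rather than this naive despreading:

import numpy as np

REP = 8   # toy repetition code: every bit becomes 8 chips, same for all users

def interleaver(user_id, n):
    """User-specific pseudorandom permutation; this, not a signature
    sequence, is what separates the users."""
    return np.random.default_rng(user_id).permutation(n)

def transmit(bits, user_id):
    chips = np.repeat(2 * np.asarray(bits) - 1, REP)   # identical spreading code
    return chips[interleaver(user_id, len(chips))]     # user-specific scrambling

def deinterleave(chips, user_id):
    pi = interleaver(user_id, len(chips))
    out = np.empty_like(chips)
    out[pi] = chips                                    # invert the permutation
    return out

# Two users superimposed on the channel; user 1 is recovered by its own
# deinterleaver followed by despreading (summing the chips of each bit).
rx = transmit([1, 0, 1, 1], user_id=1) + transmit([0, 1, 1, 0], user_id=2)
soft = deinterleave(rx, user_id=1).reshape(-1, REP).sum(axis=1)
bits_hat = (soft > 0).astype(int)   # typically recovers [1, 0, 1, 1]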


Friday, July 3, 2009

Surround sound system

We are now entering the Third Age of reproduced sound. The monophonic era was the First Age, which lasted from Edison's invention of the phonograph in 1877 until the 1950s. During that time, the goal was simply to reproduce the timbre of the original sound; no attempt was made to reproduce directional properties or spatial realism. The stereo era was the Second Age. Based on inventions from the 1930s, it reached the public in the mid-1950s and has provided great listening pleasure for four decades. Stereo improved the reproduction of timbre and added two dimensions of space: the left-right spread of performers across a stage, and a set of acoustic cues that allow listeners to perceive a front-to-back dimension. In two-channel stereo, this realism is based on fragile sonic cues; in most ordinary two-speaker stereo systems, these subtle cues can easily be lost, causing the playback to sound flat and uninvolved. Multi-channel surround systems, on the other hand, can provide this involving presence in a way that is robust, reliable and consistent. The purpose of this seminar is to explore the advances and technologies of surround sound in the consumer market.

Space Mouse

In the future, computation will be human-centered. It will be freely available everywhere, like batteries and power sockets, or oxygen in the air we breathe. It will enter the human world, handling our goals and needs and helping us to do more while doing less. We will not need to carry our own devices around with us. Instead, configurable generic devices, either handheld or embedded in the environment, will bring computation to us, whenever we need it and wherever we might be. As we interact with these “anonymous” devices, they will adopt our information personalities. They will respect our desires for privacy and security.
New systems will boost our productivity. They will help us automate repetitive human tasks, control a wealth of physical devices in the environment, find the information we need (when we need it, without forcing our eyes to examine thousands of search-engine hits), and enable us to work together with other people through space and time.
It must be accessible anywhere. It must adapt to change, both in user requirements and in operating conditions. It must never shut down or reboot; components may come and go in response to demand, errors, and upgrades, but Oxygen as a whole must be available all the time.



Smart Dust

ABSTRACT

Advances in hardware technology have enabled very compact, autonomous and mobile nodes, each having one or more sensors, computation and communication capabilities, and a power supply. The Smart Dust project is exploring whether an autonomous sensing, computing, and communication system can be packed into a cubic-millimeter mote to form the basis of integrated, massively distributed sensor networks. It focuses on reducing power consumption, size and cost. To build these small sensors, processors, communication devices and power supplies, designers have used MEMS (Micro-Electro-Mechanical Systems) technology.

Smart Dust nodes, otherwise known as "motes", are usually the size of a grain of sand, and each mote consists of:
1. sensors
2. a transmitter and receiver enabling bidirectional wireless communication
3. processors and control circuitry
4. a power supply unit

Using smart dust nodes, the energy to acquire and process a sample and then transmit some data about it can be as small as a few nanojoules.

These dust motes enable a lot of applications, because at such small dimensions they can be scattered from aircraft for battlefield monitoring or stirred into house paint to create the ultimate home sensor network.

If you are interested in this seminar topic, mail them to get the full report* of the seminar topic.
Mail ID: - contact4seminars@gmail.com

* conditions apply

Digital T.V

Broadcasters are concerned with many in-band and out-of-band transmission parameters, including data signal quality, clock tolerance, radiated power tolerance, carrier phase noise, adjacent channel emissions, and precision frequency offset requirements. The FCC permits DTV power-level changes and/or changes in transmitting antenna location, height and beam tilt in the context of de minimis interference levels. The Advanced Television Systems Committee (ATSC) has provided guidelines for broadcasters in the form of suggested compliance specifications, which are covered in this paper.

On December 24, 1996, the FCC adopted the ATSC system (minus video formats) as the new digital television standard for the U.S. Shortly thereafter, on April 3, 1997, the FCC issued its rules for digital operation as well as its first set of channel allocations, loaning each U.S. broadcaster a second 6 MHz channel for digital television transmission. Subsequently, a revised set of allocations was issued in March 1998 with additional and changed rules, including a new transmission emission mask and potential increased transmission power, provided new de minimis interference criteria are met.

Terrestrial digital television (DTV) broadcasting is now underway in the major markets in the United States, after the Federal Communications Commission (FCC) in several Reports and Orders set the standard on December 24, 1996, and subsequently released rules of operation and broadcaster channel allocations. DTV broadcasters are mainly concerned with the in-band and out-of-band parameters. The in-band parameters describe the signal quality; the important ones are spectral shape, data pulse shape, data eye pattern, and transmitted power specifications. The out-of-band parameters include the rigid DTV emission mask, NTSC-weighted out-of-band power, DTV unweighted out-of-band power, and beam tilt techniques.

Organic LED


Scientific research into semiconducting organic materials as the active substance in light-emitting diodes (LEDs) has increased immensely during the last four decades. Organic semiconductors were first reported in the 1960s, and at the time the materials were considered merely a scientific curiosity. (They are named organic because they consist primarily of carbon, hydrogen and oxygen.) However, when it was recognized in the eighties that many of them are photoconductive under visible light, industrial interest was attracted. Many major electronics companies, such as Philips and Pioneer, are today investing considerable money in the science of organic electronic and optoelectronic devices. The major reason for the great attention to these devices is that they could be much more efficient than today's components when it comes to power consumption and produced light. Common light emitters today, light-emitting diodes (LEDs) and ordinary light bulbs, consume more power than organic diodes do, and the drive to decrease power consumption always matters. Other reasons for the industrial attention are, for example, that organic full-color displays may eventually replace today's liquid crystal displays (LCDs) used in laptop computers, and may even one day replace our ordinary CRT screens.

Organic light-emitting devices (OLEDs) operate on the principle of converting electrical energy into light, a phenomenon known as electroluminescence. They exploit the properties of certain organic materials which emit light when an electric current passes through them. In its simplest form, an OLED consists of a layer of this luminescent material sandwiched between two electrodes. When an electric current is passed between the electrodes, through the organic layer, light is emitted with a color that depends on the particular material used. In order to observe the light emitted by an OLED, at least one of the electrodes must be transparent.

When OLEDs are used as pixels in flat-panel displays, they have some advantages over backlit active-matrix LCD displays: greater viewing angle, lighter weight, and quicker response. Since only the part of the display that is actually lit up consumes power, the most efficient OLEDs available today use less power. Based on these advantages, OLEDs have been proposed for a wide range of display applications including magnified microdisplays, wearable and head-mounted computers, digital cameras, personal digital assistants, smart pagers, virtual reality games, and mobile phones, as well as medical, automotive, and other industrial applications.




Still need more information, or the PPT/DOC? Mail me.

Smart sensors

ABSTRACT

Smart sensors are sensors with integrated electronics that can perform one or more of the following functions: logic operations, two-way communication, and decision making.
The advent of integrated circuits, which became possible because of tremendous progress in semiconductor technology, resulted in the low-cost microprocessor. Thus, if it is possible to design a low-cost silicon-based sensor, the overall cost of the control system can be reduced. We can have integrated sensors with the electronics and the transduction element together on one silicon chip; this complete system can be called a system-on-chip. The main aim of integrating the electronics and the sensor is to make an intelligent sensor, which can be called a smart sensor, one that has the ability to make some decisions. Physically, a smart sensor consists of a transduction element, signal conditioning electronics and a controller/processor that supports some intelligence, all in a single package. This report covers the usefulness of silicon technology for smart sensors, the physical phenomena of conversion to electrical output using silicon sensors, and the characteristics of smart sensors. A general architecture of a smart sensor is presented.

If you are interested in this seminar topic, mail them to get the full report* of the seminar topic.
Mail ID: - contact4seminars@gmail.com

* conditions apply

Wireless LED

Billions of visible LEDs are produced each year, and the emergence of high-brightness AlGaAs and AlInGaP devices has given rise to many new markets. The surprising growth of activity in the relatively old LED technology has been spurred by the introduction of AlInGaP devices. Recently developed AlGaInN materials have led to improvements in the performance of bluish-green LEDs, which have luminous efficacy peaks much higher than those of incandescent lamps. This advancement has led to the production of large-area full-color outdoor LED displays with diverse industrial applications.




The novel idea of this article is to modulate light waves from visible LEDs for communication purposes. This concurrent use of visible LEDs for simultaneous signaling and communication, called iLight, leads to many new and interesting applications. It is based on the idea of fast switching of LEDs and the modulation of visible-light waves for free-space communications. The feasibility of this approach has been examined, and hardware has been implemented with experimental results. An optical link has been implemented using an LED traffic-signal head as a transmitter: audio messages can be sent using the LED transmitter, and a receiver located around 20 m away can play back the messages through a speaker. Another prototype, resembling a circular speed-limit sign 2 ft in diameter, was also built; its audio signal can be received in open air over a distance of 59.3 m (194.5 ft). For data transmission, digital data can be sent using the same LED transmitter, and experiments were set up to send speed-limit or location-ID information.

The work reported in this article differs from the use of infrared (IR) radiation as a medium for short-range wireless communications. Currently, IR links and local-area networks are available, and IR transceivers for use as data links are widely available on the market. Some systems comprise IR transmitters that convey speech messages to small receivers carried by persons with severe visual impairments; the Talking Signs system is one such IR remote signage system, developed at the Smith-Kettlewell Rehabilitation Engineering Research Center. It can provide a repeating, directionally selective voice message that originates at a sign. However, there has been very little work on the use of visible light as a communication medium, and the availability of high-brightness LEDs makes the visible-light medium even more feasible for communications. Any product with visible-LED components (like an LED traffic-signal head) can be turned into an information beacon.

The iLight technology has several characteristics that differ from IR. The iLight transceivers make use of the direct line-of-sight (LOS) property of visible light, which is ideal in applications providing directional guidance to persons with visual impairments; IR, on the other hand, bounces back and forth in a confined environment. Another advantage of iLight is that the transmitter provides an easy target for LOS reception, because the LEDs, being on at all times, also indicate the location of the transmitter: a user searching for information has only to look for lights from an iLight transmitter. Very often, the device is concurrently used for illumination, display, or visual signage, so there is no need to implement an additional transmitter for information broadcasting. Compared with an IR transmitter, an iLight transmitter has to be concerned with even brightness: there should be no apparent difference to a user in the visible light that emits from an iLight device.
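
A toy sketch of the even-brightness idea: on-off keying between two visibly "on" drive levels, switched far faster than the eye can follow. All parameters are illustrative and not taken from the article:

import numpy as np

SAMPLES_PER_BIT = 8
HIGH, LOW = 1.0, 0.6    # both levels visibly "on", so the lamp never appears to flicker

def modulate(bits):
    """On-off keying between two brightness levels, well above flicker rates."""
    return np.where(np.repeat(np.asarray(bits), SAMPLES_PER_BIT) == 1, HIGH, LOW)

def demodulate(signal):
    """Average each bit period at the photodetector and threshold at mid-level."""
    frames = signal.reshape(-1, SAMPLES_PER_BIT).mean(axis=1)
    return (frames > (HIGH + LOW) / 2).astype(int)

tx = modulate([1, 0, 1, 1, 0])
assert list(demodulate(tx)) == [1, 0, 1, 1, 0]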

Light Emitting Polymers

Organic light emitting diode (OLED) display technology has been grabbing headlines in recent years. Now one form of OLED displays, LIGHT EMITTING POLYMER (LEP) technology is rapidly emerging as a serious candidate for next generation flat panel displays. LEP technology promises thin, light weight emissive displays with low drive voltage, low power consumption, high contrast, wide viewing angle, and fast switching times.

One of the main attractions of this technology is its compatibility with plastic substrates and with a number of printer-based fabrication techniques, which offer the possibility of roll-to-roll processing for cost-effective manufacturing.

LEPs are inexpensive and consume much less power than other flat panel displays. Their thin form and flexibility allow devices to be made in any shape. One interesting application of these displays is electronic paper that can be rolled up like a newspaper.

Cambridge Display Technology of the UK is betting that its lightweight, ultra-thin light emitting polymer displays have the right stuff to finally replace the bulky, space-consuming and power-hungry cathode ray tubes (CRTs) used in television screens and computer monitors, and to become the ubiquitous display medium of the 21st century.

Multimedia messaging Service

A picture says more than a thousand words, and is more fun to look at. Nearly everyone believes this, and it is one of the ideas that inspired the mobile developers behind this technology: MMS.

MMS, the Multimedia Messaging Service, is a standardized messaging service. It traces its roots to SMS (Short Messaging Service) and EMS (Enhanced Messaging Service). MMS allows users to send and receive messages exploiting the whole array of media types available today (text, images, audio, video, graphics, data and animations), while also making it possible to support new content types as they become popular.

With MMS, for example, users can send each other personal pictures together with a voice message, such as a greeting card with a picture, a handwritten message, and a personal song or sound clip recorded by the user. Video conferencing, which is expected to make a great impact in the future, is also possible with this technology. Using the Wireless Application Protocol (WAP) as bearer technology, and powered by the high-speed transmission technologies EDGE, GPRS and UMTS (WCDMA), Multimedia Messaging allows users to send and receive messages that look like PowerPoint-style presentations.

MMS supports standard image formats such as GIF and JPEG, video formats such as MPEG-4, and audio formats such as MP3, MIDI and WAV, as well as the newer AMR format.
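As a rough illustration of what an MMS message amounts to, the sketch below models a message as a list of typed media parts. The class names and content-type strings are hypothetical stand-ins; the real encapsulation is defined by the MMS standards, not by this toy model.

# Toy model of an MMS message as a collection of typed media parts.
# Names and type strings are illustrative assumptions only.
from dataclasses import dataclass, field

SUPPORTED_TYPES = {
    "text/plain",                 # text
    "image/gif", "image/jpeg",    # images
    "video/mpeg4",                # video
    "audio/mp3", "audio/midi", "audio/wav", "audio/amr",  # audio
}

@dataclass
class MediaPart:
    content_type: str
    payload: bytes

@dataclass
class MmsMessage:
    sender: str
    recipient: str
    parts: list = field(default_factory=list)

    def add_part(self, part: MediaPart) -> None:
        if part.content_type not in SUPPORTED_TYPES:
            raise ValueError("unsupported media type: " + part.content_type)
        self.parts.append(part)

msg = MmsMessage(sender="+4712345678", recipient="+4787654321")
msg.add_part(MediaPart("image/jpeg", b"...picture bytes..."))
msg.add_part(MediaPart("audio/amr", b"...voice greeting..."))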

The greatest advantage of MMS is its ability to carry messages not only from mobile to mobile, but also between mobiles and PDAs, laptops, Internet e-mail and other data devices.

MMS can also act as a virtual email client. Greatly anticipated by young users in particular, MMS is projected to fuel the growth of related market segments by as much as forty percent.


Face Recognition Technology

Wouldn’t you love to replace password-based access control, and avoid having to reset forgotten passwords and worry about the integrity of your system? Wouldn’t you like to rest secure in the comfort that your healthcare system does not rely merely on your social security number as proof of your identity when granting access to your medical records?

Because each of these questions is becoming more and more important, access to reliable personal identification is becoming increasingly essential. Conventional methods of identification, based on possession of ID cards or on exclusive knowledge like a social security number or a password, are not altogether reliable. ID cards can be lost, forged or misplaced; passwords can be forgotten or compromised. But a face is undeniably connected to its owner. It cannot be borrowed, stolen or easily forged.

Blu Ray Disc

Abstract

Blu-ray, also known as Blu-ray Disc (BD), is the name of a next-generation optical disc video recording format jointly developed by nine leading consumer electronics companies. The format was developed to enable recording, rewriting and playback of high-definition video (HDTV). Blu-ray makes it possible to record over 2 hours of digital high-definition video (HDTV), or more than 13 hours of standard-definition video (SDTV, VHS picture quality), on a 27GB disc. There are also plans for higher-capacity discs that are expected to hold up to 50GB of data.
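The quoted capacity figures imply average video bitrates that are easy to check. A quick back-of-the-envelope calculation, taking the numbers above at face value and assuming decimal gigabytes:

# Implied average video bitrates for a 27 GB disc.
disc_bits = 27e9 * 8               # 27 GB expressed in bits
hdtv_seconds = 2 * 3600            # "over 2 hours" of HDTV
sdtv_seconds = 13 * 3600           # "more than 13 hours" of SDTV
print(disc_bits / hdtv_seconds / 1e6)  # ~30 Mbit/s for HDTV
print(disc_bits / sdtv_seconds / 1e6)  # ~4.6 Mbit/s for SDTV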

The Blu-ray Disc technology can store sound and video while maintaining high quality, and can also access the stored content in an easy-to-use way. Adoption of the Blu-ray Disc in a variety of applications, including PC data storage and high-definition video software, is being considered.

Thin Displays

ABSTRACT

In the modern era, where technology is highly advanced, new machinery and instruments are a prerequisite. There is demand for highly efficient measurement systems and for interactive, user-friendly displays. In the entertainment sector, high-precision imaging is needed for efficient operation.

With the advent of OLEDs, conventional LEDs and LCDs are becoming history. The high-quality imaging of OLEDs makes critical fields such as defence and research more efficient in operation.

Against this background, the purpose of this seminar is to throw light on the capabilities of OLEDs and to give a brief study of their technology.


Tunable lasers

Tunable lasers are still a relatively young technology, but as the number of wavelengths in networks increases, so will their importance. Each wavelength in an optical network is separated from the next by a multiple of 0.8 nanometers (sometimes referred to as 100 GHz spacing). Current commercial products can cover maybe four of these wavelengths at a time.

While not the ideal solution, this still cuts down the required number of spare lasers. More advanced solutions hope to cover a larger number of wavelengths, and should cut the cost of spares even further.

The devices themselves are still semiconductor-based lasers that operate on principles similar to those of the basic non-tunable versions. Most designs incorporate some form of grating, like those in a distributed feedback laser. These gratings can be altered to change the wavelengths they reflect in the laser cavity, usually by running electric current through them and thereby altering their refractive index. The tuning range of such devices can be as high as 40 nm, which would cover any of 50 different wavelengths in a system with 0.8 nm wavelength spacing. Technologies based on vertical cavity surface emitting lasers (VCSELs) incorporate movable cavity ends that change the length of the cavity, and hence the wavelength emitted. Current designs of tunable VCSELs have similar tuning ranges.
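To put those numbers together, the sketch below converts the 0.8 nm channel spacing into a frequency spacing and counts the channels a 40 nm tuning range covers. The 1550 nm centre wavelength is an assumption (typical for the C band used in long-haul networks), not a figure from the text.

# Channel spacing and channel count for the figures quoted above.
c = 3e8                         # speed of light, m/s
wavelength = 1550e-9            # assumed C-band centre wavelength, m
spacing = 0.8e-9                # 0.8 nm channel spacing, m
# 0.8 nm expressed as a frequency spacing near 1550 nm:
delta_f = c * spacing / wavelength ** 2
print(delta_f / 1e9)            # ~100 GHz, matching the usual label
# Channels covered by a 40 nm tuning range at 0.8 nm spacing:
print(40e-9 / spacing)          # 50 channels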

Thermomechanical Data Storage

ABSTRACT

In the future, the current method of magnetically storing data may reach its limit of maximum achievable density. We therefore need a data storage technology with high storage capacity and small size. One solution is thermomechanical data storage, a scheme in which nanometer-sized pits on a plastic disc represent digital data. This storage concept combines ultrahigh density, terabit capacity, small form factor and high data rates. Using it, we will be able to store the equivalent of 25 DVDs on a surface the size of a postage stamp.

IBM scientists have demonstrated a data storage density of a trillion bits per square inch, 20 times higher than the densest magnetic storage available today. IBM achieved this remarkable density, enough to store 25 million printed textbook pages on a surface the size of a postage stamp, in a research project code-named "Millipede". Millipede uses thousands of nano-sharp tips to punch indentations representing individual bits into a thin plastic film. The result is akin to a nanotech version of the venerable data-processing punch card developed more than 110 years ago, but with two crucial differences: the Millipede technology is re-writeable, and it may be able to store more than 3 billion bits of data in the space occupied by just one hole in a standard punch card.
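The headline figures are self-consistent, as a quick calculation shows. The postage-stamp area of roughly one square inch and the 4.7 GB single-layer DVD capacity are assumptions used for illustration:

# Sanity check of the Millipede density figures.
bits_per_sq_inch = 1e12           # "a trillion bits per square inch"
stamp_area = 1.0                  # assume a stamp is about 1 square inch
capacity_bytes = bits_per_sq_inch * stamp_area / 8
print(capacity_bytes / 1e9)       # ~125 GB on a stamp-sized surface
print(capacity_bytes / 4.7e9)     # ~27 single-layer DVDs, consistent with "25 DVDs"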


Protein Memories for Computers

ABSTRACT

The world’s most advanced supercomputer doesn’t require a single semiconductor chip. The human brain consists of organic molecules that combine to form a highly sophisticated network able to calculate, perceive, manipulate, self-repair, think and feel. Digital computers can certainly perform calculations much faster and more precisely than humans, but even simple organisms are superior to computers in the other five domains. Computer designers may never be able to make machines with all the faculties of a natural brain, but we can exploit some special properties of biological molecules, particularly proteins, to build computer components that are faster, smaller and more powerful than any electronic device.

Devices fabricated from biological molecules promise compact size and faster data storage. They lend themselves to use in parallel-processing computers, 3D memories and neural networks.

As the trend towards miniaturization continues, the cost of manufacturing a chip increases considerably. The use of biological molecules as the active components in computer circuitry, on the other hand, may offer an alternative approach that is more economical.


FLUORESCENT MULTILAYER DISC (FMD)

The demand for digital storage capacity is growing by more than 60% per annum. Facilities like storage area networks, data warehouses, supercomputers and e-commerce-related data mining require much greater capacity to process the volume of data.

Further, with the advent of the high-bandwidth Internet and data-intensive applications such as high-definition TV (HDTV) and video and music on demand, even smaller devices such as personal VCRs, PDAs and mobile phones will require multi-gigabyte and terabyte capacities in the next couple of years.

This ever-increasing capacity demand can only be met by a steady increase in the areal density of magnetic and optical recording media. In the future, this density increase will be possible by taking advantage of shorter-wavelength lasers, higher lens numerical aperture (NA), or near-field techniques. Today, optical data storage capacities have already been increased by creating double-sided media; this approach to increasing effective storage capacity is quite unique to optical memory technologies. The fluorescent multilayer disc (FMD) is a three-dimensional store for large amounts of data. Three-dimensional optical storage opens up another dimension for increasing the capacity of a given volume of media, with the objective of achieving a cubic storage element having the dimensions of the writing/reading laser wavelength. The current wavelength of 650 nm should be sufficient to store up to a terabyte of data.
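To see why a wavelength-sized storage element points to terabyte-class capacities, the following back-of-the-envelope calculation treats each bit as a cube 650 nm on a side. The cubic voxel is an idealization for illustration, not an FMD design parameter:

# Ideal volumetric density for a 650 nm cubic storage element.
voxel_edge = 650e-9                     # one laser wavelength, in metres
voxels_per_cm = 1e-2 / voxel_edge       # ~15,000 voxels along each centimetre
bits_per_cubic_cm = voxels_per_cm ** 3
print(bits_per_cubic_cm / 8 / 1e12)     # ~0.45 TB per cubic centimetre

A disc only a few cubic centimetres in volume therefore lands in the terabyte range quoted above.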



Thursday, July 2, 2009

Ultra Conductors

1.1 Superconductivity

Superconductivity is the phenomenon in which a material loses all its electrical resistance, allowing electric current to flow without dissipation or loss of energy. The atoms in a material vibrate due to the thermal energy it contains: the higher the temperature, the more the atoms vibrate. An ordinary conductor's electrical resistance is caused by these atomic vibrations, which obstruct the movement of the electrons forming the current. If an ordinary conductor were cooled to a temperature of absolute zero, atomic vibrations would cease, electrons would flow without obstruction, and electrical resistance would fall to zero. A temperature of absolute zero cannot be achieved in practice, but some materials exhibit superconducting characteristics at higher temperatures.

In 1911, the Dutch physicist Heike Kamerlingh Onnes discovered superconductivity in mercury at a temperature of approximately 4 K (-269 °C). Many other superconducting metals and alloys were subsequently discovered but, until 1986, the highest temperature at which superconducting properties were achieved was around 23 K (-250 °C), with the niobium-germanium alloy Nb3Ge.

In 1986, Georg Bednorz and Alex Müller discovered a metal oxide that exhibited superconductivity at the relatively high temperature of 30 K (-243 °C). This led to the discovery of ceramic oxides that superconduct at even higher temperatures. In 1988, an oxide of thallium, calcium, barium and copper (Tl2Ca2Ba2Cu3O10) displayed superconductivity at 125 K (-148 °C), and in 1993 a family based on copper oxide and mercury attained superconductivity at 160 K (-113 °C). These "high-temperature" superconductors are all the more noteworthy because ceramics are usually extremely good insulators.

Like ceramics, most organic compounds are strong insulators; however, some organic materials known as organic synthetic metals do display both conductivity and superconductivity. In the early 1990s, one such compound was shown to superconduct at approximately 33 K (-240 °C). Although this is well below the temperatures achieved for ceramic oxides, organic superconductors are considered to have great potential for the future.

New superconducting materials are being discovered on a regular basis, and the search is on for room-temperature superconductors which, if discovered, are expected to revolutionize electronics. Room-temperature superconductors ("ultraconductors") are being developed for commercial applications by Room Temperature Superconductors Inc. (ROOTS). Ultraconductors are the result of more than 16 years of scientific research, independent laboratory testing and eight years of engineering development. From an engineering perspective, ultraconductors are a fundamentally new and enabling technology. These materials are claimed to conduct electricity at least 100,000 times better than gold, silver or copper.

1.2 Technical introduction

Ultraconductors are patented polymers being developed for commercial applications by Room Temperature Superconductors Inc. (ROOTS). The materials exhibit a characteristic set of properties, including conductivity and current-carrying capacity equivalent to superconductors.




White LED

Until recently, though, the price of an LED lighting system was too high for most residential use. With sales rising and prices steadily decreasing, it's been said that whoever makes the best white LED will open a goldmine.
White LED lighting has been used for years by the RV and boating crowd, running off direct current (DC) battery systems. It then became popular in off-the-grid houses powered by photovoltaic cells. It used to be that white LED light was possible only with "rainbow" groups of three LEDs (red, green, and blue), controlling the current to each to yield an overall white light. Now a blue indium gallium nitride chip with a phosphor coating is used to create the wavelength shift necessary to emit white light from a single diode. This process is much less expensive for the amount of light generated.

Each diode is about 1/4 inch across and consumes about ten milliamps (a tenth of a watt). Lamps come in various arrangements of diodes on a circuit board. Standard arrays are three, six, 12, or 18 diodes, with custom sizes available; factories can incorporate these into custom-built downlights, sconces and surface-mounted fixtures. With an inexpensive transformer, they run on standard 120-volt alternating current (AC), albeit with a slight (about 15% to 20%) power loss. They are also available as screw-in lamps to replace incandescents. A 1.2-watt white LED light cluster is as bright as a 20-watt incandescent lamp.
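The power figures above can be cross-checked with a little arithmetic. The six-hours-a-day usage in the last line is an assumed figure for illustration:

# Cross-check of the LED power figures quoted above.
per_diode_watts = 0.1            # "a tenth of a watt" per diode
cluster_watts = 1.2              # the 1.2 W cluster mentioned above
print(cluster_watts / per_diode_watts)  # ~12 diodes, one of the standard arrays
# Energy saved by replacing a 20 W incandescent, at 6 hours/day:
hours_per_year = 6 * 365
print((20 - cluster_watts) * hours_per_year / 1000)  # ~41 kWh per year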

Bluetooth Based Smart Sensor Networks

Definition
The communications capability of devices and continuous, transparent information routes are indispensable components of future-oriented automation concepts. Communication is increasing rapidly in the industrial environment, even at field level. In any industry the process can be sensed through sensors and controlled through actuators. The process is monitored in the central control room by getting signals through a pair of wires from each field device, as in Distributed Control Systems (DCS). With the advent of networking, the cost of wiring is saved by networking the field devices. But the latest trend is the elimination of wires, i.e., wireless networks.

Wireless sensor networks are networks of small devices equipped with sensors, a microprocessor and wireless communication interfaces. In 1994, Ericsson Mobile Communications, the global telecommunication company based in Sweden, initiated a study to investigate the feasibility of a low-power, low-cost radio interface, and to find a way to eliminate cables between devices. Eventually, the engineers at Ericsson named the new wireless technology "Bluetooth" in honour of the 10th-century king of Denmark, Harald Bluetooth (940 to 985 A.D.).
The goals of Bluetooth are unification and harmony as well, specifically enabling different devices to communicate through a commonly accepted standard for wireless connectivity.

BLUETOOTH
Bluetooth operates in the unlicensed ISM band at 2.4 GHz and uses a frequency-hopping spread-spectrum technique. A typical Bluetooth device has a range of about 10 meters, which can be extended to 100 meters. The communication channel supports a total bandwidth of 1 Mb/s. A single connection supports a maximum asymmetric data transfer rate of 721 kbps, or a maximum of three voice channels.

BLUETOOTH NETWORKS
In Bluetooth, a piconet is a collection of up to 8 devices that frequency-hop together. Each piconet has one master, usually the device that initiated establishment of the piconet, and up to 7 slave devices. The master's Bluetooth address is used to define the frequency-hopping sequence. Slave devices use the master's clock to synchronize their own clocks so that they can hop simultaneously.

A Piconet
When a device wants to establish a piconet, it has to perform an inquiry to discover other Bluetooth devices in range. The inquiry procedure is defined in such a way as to ensure that two devices will, after some time, visit the same frequency at the same time; when that happens, the required information is exchanged and the devices can use the paging procedure to establish a connection. When more than 7 devices need to communicate, there are two options. The first is to put one or more devices into the park state. Bluetooth defines three low-power modes: sniff, hold and park. When a device is in park mode it disassociates from the piconet, but still maintains timing synchronization with it. The master of the piconet periodically broadcasts beacons to invite a parked slave to rejoin the piconet, or to allow the slave to request to rejoin. The slave can rejoin the piconet only if there are fewer than seven active slaves already in the piconet; if not, the master has to park one of the active slaves first.

All these actions cause delay, which can be unacceptable for some applications, e.g. process control applications that require an immediate response from the command centre (central control room). A scatternet consists of several piconets connected by devices participating in multiple piconets. These devices can be slaves in all of their piconets, or master in one piconet and slave in the others. Using scatternets, higher throughput is available, and multi-hop connections between devices in different piconets are possible. A unit can communicate in only one piconet at a time, so it jumps from piconet to piconet depending upon the channel parameters.
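The membership rules described above, at most seven active slaves with further devices parked and swapped in on demand, are simple enough to capture in a toy model. The sketch below is purely illustrative and stands in for the real link-manager behaviour:

# Toy model of piconet membership: up to 7 active slaves, extras parked.
MAX_ACTIVE_SLAVES = 7

class Piconet:
    def __init__(self, master):
        self.master = master      # the master's address defines the hop sequence
        self.active = []          # at most 7 active slaves
        self.parked = []          # parked slaves keep only timing sync

    def join(self, slave):
        if len(self.active) < MAX_ACTIVE_SLAVES:
            self.active.append(slave)
        else:
            self.parked.append(slave)   # an 8th device must wait in park state

    def unpark(self, slave):
        # To readmit a parked slave when full, the master parks an active one.
        if slave not in self.parked:
            return
        if len(self.active) >= MAX_ACTIVE_SLAVES:
            self.parked.append(self.active.pop(0))
        self.parked.remove(slave)
        self.active.append(slave)

net = Piconet(master="gateway")
for i in range(9):
    net.join("sensor-%d" % i)
print(len(net.active), len(net.parked))   # 7 active, 2 parked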

Push Technology

Push technology reverses the Internet's content delivery model. Before push, content publishers had to rely upon the end-user's own initiative to bring them to a web site or download content. With push technology the publisher can deliver content directly to the user's PC, thus substantially improving the likelihood that the user will view it. Push content can be extremely timely, and delivered fresh several times a day. Information keeps coming to the user whether he asked for it or not. The most common analogue for push technology is a TV channel: it keeps sending us stuff whether we care about it or not.

Push was created to alleviate two problems facing users of the net. The first problem is information overload. The volume and dynamic nature of content on the internet is an impediment to users, and has become an ease-of-use issue. Without push, using the internet can be tedious, time-consuming, and less than dependable: users have to manually hunt down information, search out links, and monitor sites and information sources. Push applications and technology building blocks narrow that focus and add considerable ease of use. The second problem is that most end-users are restricted to low-bandwidth internet connections, such as 33.6 kbps modems, making it difficult to receive multimedia content. Push technology provides a means to pre-deliver much larger packages of content.

Push technology enables the delivery of multimedia content on the internet through the use of local storage and transparent content downloads. Like a faithful delivery agent, push, often referred to as broadcasting, delivers content directly to the user transparently and automatically. It is one of the internet's most promising technologies.


Already a success, push is being used to pump data in the form of news, current affairs, sports and so on to many computers connected to the internet. Updating software is one of the fastest-growing uses of push, and a new and exciting way to manage software update and upgrade hassles. Computer programming is an inexact art, and there is a huge need to quickly and easily get bug fixes, software updates, and even whole new programs out to people.

2. THE PUSH PROCESS

For the end user, the process of receiving push content is quite simple. First, an individual subscribes to a publisher's site or channel by providing content preferences. The subscriber also sets up a schedule specifying when information should be delivered. Based on the subscriber's schedule, the PC connects to the internet, and the client software notifies the publisher's server that the download can occur. The server collates the content pertaining to the subscriber's profile and downloads it to the subscriber's machine, after which the content is available for the subscriber's viewing.

WORKING

Interestingly enough, from a technical point of view, most push applications are pull and just appear to be 'push' to the user. In fact, a more accurate description of this process would be 'automated pull'.

The web currently requires the user to poll sites for new or updated information. This manual polling and downloading process is referred to as 'pull' technology. From a business point of view, this process provides little information about the user, and even less control over what information is acquired. It is the user who has to keep track of the locations of information sites, and who has to continuously search for informational changes - a very time-consuming process. The 'push' model alleviates much of this tedium.
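A minimal version of this 'automated pull' loop is sketched below. The channel URL, the four-hour schedule and the use of HTTP's If-Modified-Since header are assumptions for illustration, not a description of any particular push product:

# "Automated pull": poll the publisher's channel on the subscriber's
# schedule, downloading content only when it has changed.
import time
import urllib.error
import urllib.request

CHANNEL_URL = "http://publisher.example.com/channel"  # hypothetical channel
POLL_INTERVAL = 4 * 60 * 60      # subscriber's schedule: every 4 hours

def poll_once(last_modified):
    """Fetch the channel, skipping the download if nothing has changed."""
    request = urllib.request.Request(CHANNEL_URL)
    if last_modified:
        request.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(request) as response:
            return response.read(), response.headers.get("Last-Modified", "")
    except urllib.error.HTTPError as err:
        if err.code == 304:      # 304 Not Modified: nothing new to pull
            return None, last_modified
        raise

def run():
    last_modified = ""
    while True:
        content, last_modified = poll_once(last_modified)
        if content is not None:
            pass  # store the content locally, ready for the user to view
        time.sleep(POLL_INTERVAL)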

Virtual Retinal Display


Information displays are the primary medium through which text and images generated by computers and other electronic systems are delivered to end-users. While early computer systems were designed and used for tasks that involved little interaction between the user and the computer, today's graphical and multimedia information and computing environments require information displays with higher performance, smaller size and lower cost.

The market for display technologies also has been stimulated by the increasing popularity of hand-held computers, personal digital assistants and cellular phones; interest in simulated environments and augmented reality systems; and the recognition that an improved means of connecting people and machines can increase productivity and enhance the enjoyment of electronic entertainment and learning experiences.

For decades, the cathode ray tube has been the dominant display device. The cathode ray tube creates an image by scanning a beam of electrons across a phosphor-coated screen, causing the phosphors to emit visible light. The beam is generated by an electron gun and passed through a deflection system that scans it rapidly left to right and top to bottom, a process called rastering. A magnetic lens focuses the beam to create a small moving dot on the phosphor screen. It is these rapidly moving spots of light ("pixels") that raster, or "paint", the image on the surface of the viewing screen. Flat panel displays are enjoying widespread use in portable computers, calculators and other personal electronic devices. Flat panel displays can consist of hundreds of thousands of pixels, each of which is formed by one or more transistors acting on a crystalline material.

In recent years, as the computer and electronics industries have made substantial advances in miniaturization, manufacturers have sought lighter-weight, lower-power and more cost-effective displays to enable the development of smaller portable computers and other electronic devices. Flat panel technologies have made meaningful advances in these areas. Both cathode ray tubes and flat panel display technologies, however, pose difficult engineering and fabrication problems for more highly miniaturized, high-resolution displays because of inherent constraints in size, weight, cost and power consumption. In addition, both cathode ray tubes and flat panel displays are difficult to see outdoors or in other settings where the ambient light is brighter than the light emitted from the screen. Display mobility is also limited by size, brightness and power consumption.


As display technologies attempt to keep pace with miniaturization and other advances in information delivery systems, conventional cathode ray tube and flat panel technologies will no longer be able to provide an acceptable range of performance characteristics, particularly the combination of high resolution, high level of brightness and low power consumption, required for state-of-the-art mobile computing or personal electronic devices.