Wednesday, July 6, 2011

Adaptive Cruise Control

The concept of assisting the driver in the task of longitudinal vehicle control is known as cruise control. Starting from the cruise control devices of the seventies and eighties, the technology has now progressed to cooperative adaptive cruise control. This paper addresses the basic concept of adaptive cruise control and the requirements for realizing its improved versions, including stop-and-go adaptive cruise control and cooperative adaptive cruise control. Conventional cruise control could only maintain a set speed by accelerating or decelerating the vehicle. Adaptive cruise control systems assist the driver in keeping a safe distance from the preceding vehicle by controlling the engine throttle and brake according to sensor data about that vehicle. Most systems use RADAR as the sensor; a few use LIDAR. The controller includes digital signal processing modules and microcontroller chips specially designed for actuating the throttle and brake. Stop-and-go cruise control is intended for the slow, congested traffic of cities, where traffic may stop frequently. Cooperative controllers have not yet been released, but proposals already exist. This paper includes a brief theory of the pulse Doppler RADAR and FM-CW LIDAR used as sensors, and the basic concept of the controller.
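The distance-keeping behaviour described above can be sketched with a constant time-gap spacing policy: the controller picks a desired gap that grows with speed, then commands acceleration or braking from the gap error and the closing rate. This is an illustrative sketch, not the paper's controller; the gains, gap parameters and acceleration limits are assumptions.

```python
def acc_command(ego_speed, gap, closing_speed,
                min_gap=5.0, time_gap=1.5, kp=0.2, kd=0.5):
    """Acceleration command (m/s^2) for a constant time-gap spacing policy.
    gap: measured distance to the preceding vehicle (m, from radar/lidar);
    closing_speed: ego speed minus lead speed (m/s).
    All gains and limits here are illustrative assumptions."""
    desired_gap = min_gap + time_gap * ego_speed   # spacing policy
    gap_error = gap - desired_gap                  # positive => too far back
    accel = kp * gap_error - kd * closing_speed    # close gap, damp approach
    return max(-3.0, min(2.0, accel))              # comfort/braking limits
```

At 20 m/s with a 35 m gap and no relative speed, the command is zero; a larger gap produces gentle acceleration up to the comfort cap, while a fast-closing lead vehicle produces braking.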

Wednesday, January 12, 2011

Fractal Image Compression: A recent technique based on the representation of an image

Storing an image on a computer requires a large amount of memory. This problem can be averted by using image compression techniques. Most images contain some redundancy that can be removed when the image is stored and restored when it is reconstructed.

Fractal image compression is a recent technique based on the representation of an image. The self-transformability property of an image is assumed and exploited in fractal coding. It provides high compression ratios and fast decoding. It is also simple and easy to implement.
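The self-transformability idea is easiest to see on a 1-D signal: each small "range" block is approximated by a contracted affine copy of a larger "domain" block, and the decoder simply iterates that contractive map from any starting signal. The toy encoder/decoder below is an illustrative sketch of this principle, not the scheme from any particular paper; the block sizes and contractivity bound are assumptions.

```python
import numpy as np

def encode_1d(signal, r=4):
    """Toy 1-D fractal (PIFS) encoder: fit each range block of length r
    with an affine map x -> s*x + o of a downsampled domain block."""
    n = len(signal)
    domains = [signal[i:i + 2*r].reshape(r, 2).mean(axis=1)   # downsample 2:1
               for i in range(0, n - 2*r + 1, r)]
    code = []
    for j in range(0, n, r):
        rng = signal[j:j + r]
        best = None
        for k, dom in enumerate(domains):
            s, o = np.polyfit(dom, rng, 1)        # least-squares affine fit
            s = float(np.clip(s, -0.9, 0.9))      # enforce contractivity
            err = float(np.sum((s * dom + o - rng) ** 2))
            if best is None or err < best[0]:
                best = (err, k, s, o)
        code.append(best[1:])                     # (domain index, s, o)
    return code

def decode_1d(code, n, r=4, iters=20):
    """Iterate the stored contractive map; it converges from any start."""
    x = np.zeros(n)
    for _ in range(iters):
        domains = [x[i:i + 2*r].reshape(r, 2).mean(axis=1)
                   for i in range(0, n - 2*r + 1, r)]
        x = np.concatenate([s * domains[k] + o for k, s, o in code])
    return x
```

Only the per-block triples (domain index, scale, offset) are stored, which is where the compression comes from; the decoder needs no part of the original image.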

PERCEPTIVE COMPUTING: A computer with perceptual capabilities

The survival of animals depends heavily on well-developed sensory abilities. Likewise, human cognition depends on highly developed abilities to perceive, integrate, and interpret visual, auditory, and touch information. There is no doubt that if computers had even a small fraction of the perceptual ability of animals or humans, they would be much more powerful. Adding such perceptual abilities would enable computers and humans to work together more as partners. The Perceptive Computing (Blue Eyes) project aims at creating computational devices with the sort of perceptual abilities that people take for granted. Blue Eyes uses non-obtrusive sensing technology.

Computer Clothing: Digital clothes able to perform some PC functions

There is a major movement going on in the electronics and computer industries to develop wearable devices for what is being called the post-PC era. We are now at the dawn of that era, and some of these devices are already making their way to the consumer market. Computerized clothes will be the next step in making computers and devices portable without having to strap electronics onto our bodies. These digital clothes will be able to perform some PC functions. The devices are small and portable. Such apparel can be used to read heart rate and breathing, and LED displays could even be integrated into it to show text and images.

Microprocessor Based Auto Synchronization

The manual method of synchronization demands a skilled operator and is suitable only for no-load operation or normal frequency conditions. Under emergency conditions, such as a drop in frequency or the synchronizing of large machines, very fast action is needed, which may not be possible for a human operator. Thus there is a need for an auto-synchronizer in a power station or in an industrial establishment where generators are employed. This paper describes a microprocessor-based setup for synchronizing a three-phase alternator to a busbar. Existing methods of synchronization are also reviewed.
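Whatever the implementation, an auto-synchronizer must verify the classical synchronizing conditions before closing the breaker: the incoming alternator must match the busbar in voltage, frequency and phase. A minimal sketch of that check follows; the tolerance values are illustrative assumptions, not figures from the paper.

```python
def ready_to_close(v_gen, v_bus, f_gen, f_bus, phase_deg,
                   dv_max=0.05, df_max=0.1, dphase_max=10.0):
    """True when the classical synchronizing conditions are met:
    voltage within dv_max (fractional), frequency within df_max (Hz),
    and phase within dphase_max (degrees) of the busbar.
    All tolerances are illustrative assumptions."""
    return (abs(v_gen - v_bus) / v_bus <= dv_max
            and abs(f_gen - f_bus) <= df_max
            and abs(phase_deg) <= dphase_max)
```

A microprocessor-based synchronizer would evaluate such a predicate continuously from measured quantities and, when it holds, issue the breaker-close command at the predicted instant of phase coincidence.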

Wavelet Transforms: One of the important signal processing developments in the last decade

Wavelet transforms have been one of the important signal processing developments of the last decade, especially for applications such as time-frequency analysis, data compression, segmentation and vision. During this period, several efficient implementations of wavelet transforms have been derived. The theory of wavelets has roots in quantum mechanics and the theory of functions, though a unifying framework is a recent development. Wavelet analysis is performed using a prototype function called a wavelet. Wavelets are functions defined over a finite interval with an average value of zero. The basic idea of the wavelet transform is to represent an arbitrary function f(t) as a superposition of a set of such wavelets, or basis functions. These basis functions, or baby wavelets, are obtained from a single prototype wavelet called the mother wavelet by dilations or contractions (scaling) and translations (shifts). Efficient implementations of the wavelet transform have been derived based on the fast Fourier transform and short-length 'fast-running FIR algorithms' to reduce the computational complexity per computed coefficient.
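The dilation-and-translation idea is easiest to see with the simplest mother wavelet, the Haar wavelet, where one transform level reduces to pairwise averages (approximation) and pairwise differences (detail):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform: pairwise
    averages (approximation) and differences (detail) of the samples."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step: interleave sums and differences."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x
```

Applying `haar_step` recursively to the approximation coefficients gives the full multi-level transform, and because the transform is orthonormal, signal energy is preserved at every level; compression schemes exploit the fact that most detail coefficients of smooth signals are near zero.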

Energy transmission system for an artificial heart: leakage inductance compensation

A power supply system using a transcutaneous transformer to power an artificial heart through intact skin has been designed. In order to realize both high voltage gain and minimum circulating current, compensation of the leakage inductances on both sides of the transcutaneous transformer is proposed. A frequency region is identified in which the converter is robust against coupling-coefficient and load variation. In this region, the converter has inherent advantages such as zero-voltage switching (ZVS) or zero-current switching (ZCS) of the switches, high voltage gain, minimum circulating current, and high efficiency.

Keywords: Artificial heart, energy transmission system, high efficiency, high-frequency converter, high-power density, high-voltage gain, inductance compensation, soft-switched converter, transcutaneous transformer, zero-current switching (ZCS), zero-voltage switching (ZVS).
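As a back-of-the-envelope illustration of leakage compensation (not the paper's design procedure): a series capacitor can be sized to resonate with the leakage inductance at the switching frequency, so that the leakage reactance is cancelled at that frequency. The component values below are illustrative assumptions.

```python
import math

def compensation_cap(leakage_h, f_switch_hz):
    """Series capacitance that resonates with a given leakage inductance
    at the switching frequency: C = 1 / ((2*pi*f)^2 * L)."""
    return 1.0 / ((2 * math.pi * f_switch_hz) ** 2 * leakage_h)

# e.g. 10 uH of leakage at a 100 kHz switching frequency (illustrative)
c = compensation_cap(10e-6, 100e3)   # about 0.25 uF
```

In a real transcutaneous design this sizing interacts with the coupling coefficient and load, which is why the paper identifies a whole frequency region, rather than a single resonant point, over which the desirable properties hold.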

Wednesday, November 10, 2010

Integrated Gate Commutated Thyristor (IGCT)

The Integrated Gate Commutated Thyristor (IGCT) combines the advantages of the hard-driven GTO thyristor, including its dramatically improved turn-off performance, with technological breakthroughs at the device, gate-drive and application levels. The IGCT switches homogeneously over its whole area, up to the dynamic avalanche limit, so snubber circuits are no longer needed. Improved loss characteristics allow high-frequency applications extending into the kHz range. A new IGCT device family with integrated high-power diodes has been developed for applications in the 0.5-6 MVA range, extending to several hundred MVA with series and parallel connections. A first 100 MVA inverter based on the IGCT has been in commercial operation and confirms the very high reliability of this new technology. Other new applications using the IGCT platform include ABB's new ACS1000 drive for medium-voltage applications.

BiCMOS Silicon Technology: Electronics Seminar

The need for high-performance, low-power, and low-cost systems for network transport and wireless communications is driving silicon technology toward higher speed, higher integration, and more functionality. Furthermore, this integration of RF and analog mixed-signal circuits into high-performance digital signal-processing (DSP) systems must be done with minimum cost overhead to be commercially viable. While some analog and RF designs have been attempted in mainstream digital-only complementary metal-oxide semiconductor (CMOS) technologies, almost all designs that require stringent RF performance use bipolar or BiCMOS technology. Silicon integrated circuit (IC) products that, at present, require modern bipolar or BiCMOS silicon technology in the wired application space include synchronous optical network (SONET) and synchronous digital hierarchy (SDH) parts operating at 10 Gb/s and higher.

The viability of a mixed digital/analog/RF chip depends on the cost of making the silicon with the required elements; in practice, it must approximate the cost of a CMOS wafer. Cycle times for processing the wafer should not significantly exceed those for a digital CMOS wafer, and yields of the SOC chip must be similar to those of a multi-chip implementation. Much of this article examines process techniques that achieve the objectives of low cost, rapid cycle time, and solid yield.

Adaptive Piezoelectric energy harvesting circuit

This paper describes an approach to harvesting electrical energy from a mechanically excited piezoelectric element. A vibrating piezoelectric device differs from a typical electrical power source in that it has a capacitive rather than inductive source impedance, and may be driven by mechanical vibrations of varying amplitude. An analytical expression for the optimal power flow from a rectified piezoelectric device is derived, and an "energy harvesting" circuit is proposed which can achieve this optimal power flow. The harvesting circuit consists of an ac-dc rectifier with an output capacitor, an electrochemical battery, and a switch-mode dc-dc converter that controls the energy flow into the battery. An adaptive control technique for the dc-dc converter is used to continuously implement the optimal power transfer theory and maximize the power stored by the battery. Experimental results reveal that the adaptive dc-dc converter increases power transfer by over 400% compared with operation without the converter.
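A commonly used simplified model, adopted here as an assumption rather than the paper's exact derivation, treats the rectified piezoelectric source as delivering charge proportional to the difference between the open-circuit peak voltage and the rectifier voltage each cycle. Harvested power is then a parabola in the rectifier voltage, peaking at half the open-circuit voltage, and the adaptive converter's job is to hold the operating point near that peak.

```python
import numpy as np

def harvested_power(v_rect, v_oc=20.0, cp=100e-9, freq=50.0):
    """Simplified harvested-power model: P = 2*f*Cp*Vrect*(Voc - Vrect).
    v_oc (open-circuit peak, V), cp (element capacitance, F) and
    freq (vibration frequency, Hz) are illustrative assumptions."""
    return 2 * freq * cp * v_rect * np.maximum(v_oc - v_rect, 0.0)

v = np.linspace(0.0, 20.0, 2001)
v_opt = v[np.argmax(harvested_power(v))]   # peak sits at Voc/2 = 10 V
```

An adaptive dc-dc converter effectively performs this argmax online, adjusting its duty cycle so the rectifier voltage tracks Voc/2 as the vibration amplitude changes.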

Coordinated secondary voltage control to eliminate voltage violation in power system contingencies

In order to achieve more efficient voltage regulation in a power system, coordinated secondary voltage control has been proposed, bringing the extra benefit of enhancing the power system's voltage stability margin. The study is presented using an example with two SVCs and two STATCOMs deployed to eliminate voltage violations in system contingencies. The paper proposes implementing the secondary voltage control with a learning fuzzy logic controller. A key parameter of the controller is trained by a P-type learning algorithm via offline simulation, with the assistance of artificial loads injected at locations adjacent to the controller. A multiagent collaboration protocol, graphically represented as a finite state machine, is proposed for the coordination among multiple SVCs and STATCOMs. As an agent, each SVC or STATCOM can provide multilocation coverage to eliminate voltage violations at its adjacent nodes in the power system. Agents can provide collaborative support to each other, coordinated according to the proposed collaboration protocol.

Molecular Electronics: A new technology competitive with semiconductor technology

Semiconductor integration beyond Ultra Large Scale Integration (ULSI) through conventional electronic technology faces fundamental physical limitations. Beyond ULSI, a new technology may become competitive with semiconductor technology. This new technology is known as Molecular Electronics.

Molecular-based electronics can overcome the fundamental physical and economic issues limiting silicon technology. Here, molecules will be used in place of semiconductors, creating electronic circuits so small that their size will be measured in atoms. By using molecular-scale technology, we can realize molecular AND gates, OR gates, XOR gates and so on.

The dramatic reduction in size, and the sheer enormity of numbers in manufacture, are the principal benefits promised by the field of molecular electronics.

Tele-Immersion (TI): Free full engineering seminar report

It is 2010 and you have a very important meeting with your business associates in Chennai. However, you have visitors from Japan coming for a mega business deal the same day. Is there any technology that lets you deal with both? The answer is yes, and its name is Tele-Immersion. Tele-Immersion is a technology by which you can interact instantly with a person on the other side of the globe through a simulated holographic environment. This technology, which will arrive along with Internet2, will change the way we work, study and get medical help. It will change the way we live. Tele-Immersion (TI) is defined as the integration of audio and video conferencing, via image-based modeling, with collaborative virtual reality (CVR) in the context of data-mining and significant computation. The 3D effect behind tele-immersion makes it feel like the real thing. The ultimate goal of TI is not merely to reproduce a real face-to-face meeting in every detail, but to provide the "next generation" interface for collaborators worldwide to work together in a virtual environment that is seamlessly enhanced by computation and large databases. When participants are tele-immersed, they are able to see and interact with each other and with objects in a shared virtual environment.

Tele-immersion can be of immense use in the medical industry, and it also finds application in the field of education.

Tuesday, October 26, 2010

Cylinder Deactivation: A fast emerging technology to save fuel

With alternatives to the petrol engine being announced ever so often, you could be forgiven for thinking that the old favourite is on its last legs, but nothing could be further from the truth, and the possibilities for developing the petrol engine are endless. One of the most crucial jobs on the agenda is to find ways of reducing fuel consumption, cutting emissions of the greenhouse gas CO2, and reducing the toxic emissions that threaten air quality. One such fast-emerging technology is cylinder deactivation, where a number of cylinders are shut down when less power is needed, in order to save fuel.
The simple fact is that when you only need small amounts of power, such as when crawling around town, what you really need is a smaller engine. To put it another way, an engine performs most efficiently when it is working hard, so asking it to do the work of an engine half its size means efficiency suffers. Pumping, or throttling, losses are mostly to blame. Cylinder deactivation is one of the technologies that improve fuel economy; its objective is to reduce engine pumping losses under certain vehicle operating conditions.

When a petrol engine is working with the throttle wide open, pumping losses are minimal. But at part throttle, the engine wastes energy trying to breathe through a restricted airway, and the bigger the engine, the bigger the problem. Deactivating half the cylinders at part load is much like temporarily fitting a smaller engine.
During World War II, enterprising car owners disconnected a spark plug wire or two in hopes of stretching their precious gasoline ration. Unfortunately, it didn't improve gas mileage. Nevertheless, Cadillac resurrected the concept out of desperation during the second energy crisis. The "modulated displacement" 6.0L V-8-6-4, introduced in 1981, disabled two, then four cylinders during part-throttle operation to improve the gas mileage of every model in Cadillac's lineup. A digital dash display reported not only range, average mpg, and instantaneous mpg, but also how many cylinders were operating. Customers enjoyed the mileage boost but not the side effects. Many of them ordered dealers to cure their Cadillacs of the shakes and stumbles even if that meant disconnecting the modulated-displacement system.


Like wide ties, short skirts and $2-per-gallon gas, snoozing cylinders are back. General Motors, the first to show renewed interest in the idea, calls it Displacement on Demand (DoD). DaimlerChrysler, the first manufacturer to hit the U.S. market with a modern cylinder shut-down system, calls its approach the Multi-Displacement System (MDS). And Honda, which beat everyone to the punch by equipping Japanese-market Inspire models with cylinder deactivation last year, calls the approach Variable Cylinder Management (VCM).
The motivation is the same as before: improved gas mileage. Disabling cylinders finally makes sense because of the strides achieved in electronic powertrain controls. According to GM, computing power has increased 50-fold in the past two decades, and the memory available for control algorithms is 100 times greater. This time around, manufacturers expect to disable unnecessary cylinders so seamlessly that the driver never knows what's happening under the hood.

MANUFACTURING THROUGH ELECTRO CHEMICAL MACHINING

ABSTRACT:
The machining of complex shaped designs was difficult earlier, but with the advent of new machining processes incorporating chemical, electrical and mechanical actions, manufacturing has redefined itself. This paper deals with one of these revolutionary processes, Electro Chemical Machining (ECM).

INTRODUCTION:
Electro chemical machining (ECM) is the controlled removal of metal by anodic dissolution in an electrolytic medium, in which the workpiece is the anode and the tool is the cathode.
Working: Two electrodes are placed about 0.5 mm apart and immersed in an electrolyte, typically a solution of sodium chloride. When an electrical potential of about 20 V is applied between the electrodes, the ions in the electrolyte migrate toward the electrodes.
Positively charged ions are attracted towards the cathode and negatively charged ions towards the anode. This initiates the flow of current in the electrolyte. The electrolysis process that takes place at the cathode liberates hydroxyl ions and free hydrogen. The hydroxyl ions combine with the metal ions of the anode to form insoluble metal hydroxides, and material is thus removed from the anode. This process continues, and the tool reproduces its shape in the workpiece (anode). The high current densities promote rapid generation of metal hydroxides and gas bubbles in the small gap between the electrodes, and these become a barrier to the electrolyzing current after a few seconds. To maintain a continuous high-density current, these products have to be removed continuously, which is achieved by circulating the electrolyte at high velocity through the gap between the electrodes. Note also that the machining gap grows as material is removed, so to maintain a constant gap the cathode should be advanced towards the anode at the same rate at which material is removed.
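The anodic dissolution described above is governed by Faraday's laws of electrolysis, so a rough volumetric removal rate can be estimated from the machining current. The material constants below are standard values for iron; the 1000 A current is an illustrative assumption.

```python
FARADAY = 96485.0   # Faraday constant, C/mol

def ecm_removal_rate(current_a, atomic_mass_g_mol, valency, density_g_cm3):
    """Volumetric removal rate (cm^3/s) from Faraday's law:
    V = I * A / (Z * F * rho)."""
    mass_per_s = current_a * atomic_mass_g_mol / (valency * FARADAY)  # g/s
    return mass_per_s / density_g_cm3

# Iron: A = 55.85 g/mol, Z = 2, rho = 7.87 g/cm^3, at an assumed 1000 A
rate = ecm_removal_rate(1000.0, 55.85, 2, 7.87)   # about 0.037 cm^3/s
```

Actual removal rates fall somewhat below this ideal because the current efficiency is less than 100%, part of the current being consumed by side reactions such as gas evolution.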

Monday, October 25, 2010

CHEMICAL ROCKET ENGINES

Chemical rocket engines, like those on the space shuttle, work by burning two propellants to create heat, which causes the combustion gases to expand and exit the engine through a nozzle. In doing so they create the thrust that lifts the shuttle into orbit. Smaller chemical engines are used to change orbits or to keep satellites in a particular orbit. For reaching very distant parts of the solar system, chemical engines have the drawback that delivering a payload takes an enormous amount of fuel. Consider the Saturn V rocket that put men on the moon: 5,000,000 pounds of its total takeoff weight of 6,000,000 pounds was fuel. The problem is that all the energy for a chemical engine comes from the energy stored in its propellants.
Electric rocket engines use batteries, solar power, or some other energy source to accelerate and expel charged particles. These rocket engines have extremely high specific impulses, so they are very efficient, but they produce low thrusts. The thrusts they produce are sufficient only to accelerate small objects, changing the object's speed by a small amount in the vacuum of space. However, given enough time, these low thrusts can gradually accelerate objects to high speeds. This makes electric propulsion suitable only for travel in space. Because electric rockets are so efficient and produce small thrusts, they use very little fuel. Some electric rockets can provide thrust for years, making them ideal for deep-space missions. Satellites or other spacecraft that use electric rockets for propulsion must first be boosted into space by more powerful chemical rockets or launched from a spacecraft.
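The fuel-mass penalty of chemical propulsion follows directly from the Tsiolkovsky rocket equation. The sketch below uses the Saturn V mass figures quoted above together with illustrative specific impulses (roughly 300 s for a chemical engine versus a few thousand seconds for an electric one, both assumptions here, not figures from the text):

```python
import math

G0 = 9.81   # standard gravity, m/s^2

def delta_v(isp_s, m0, mf):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf).
    Mass units cancel, so ratios in any unit work."""
    return isp_s * G0 * math.log(m0 / mf)

# Saturn V figures from the text: 6 million lb at liftoff, 1 million dry.
chemical = delta_v(300.0, 6.0, 1.0)    # ~5.3 km/s with a chemical Isp of 300 s
electric = delta_v(3000.0, 6.0, 1.0)   # 10x more for the same propellant mass
```

Because delta-v grows only logarithmically with the mass ratio but linearly with specific impulse, a tenfold increase in Isp buys ten times the delta-v from the same propellant fraction, which is exactly why low-thrust, high-Isp electric engines win for deep-space missions.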

CARBON NANOTUBES

Carbon Nanotubes -- tiny tubes about 10,000 times thinner than a human hair -- consist of rolled up sheets of carbon hexagons.
HISTORY
Discovered in 1991 by researchers at NEC, they have the potential for use as minuscule wires or in ultrasmall electronic devices.
To build those devices, scientists must be able to manipulate the nanotubes in a controlled way.
DEVELOPMENT

IBM researchers using an atomic force microscope (AFM), an instrument whose tip can apply accurately measured forces to atoms and molecules, have recently devised a means of changing a nanotube's position, shape and orientation, as well as cutting it.

Continuously variable transmission (CVT): A potential solution to the fuel economy dilemma

After more than a century of research and development, the internal combustion (IC) engine is nearing both perfection and obsolescence: engineers continue to explore the outer limits of IC efficiency and performance, but advancements in fuel economy and emissions have effectively stalled. While many IC vehicles meet Low Emissions Vehicle standards, these will give way to new, stricter government regulations in the very near future. With limited room for improvement, automobile manufacturers have begun full-scale development of alternative power vehicles. Still, manufacturers are loath to scrap a century of development and billions or possibly even trillions of dollars in IC infrastructure, especially for technologies with no history of commercial success. Thus, the ideal interim solution is to further optimize the overall efficiency of IC vehicles.
One potential solution to this fuel economy dilemma is the continuously variable transmission (CVT), an old idea that has only recently become a bastion of hope to automakers. CVTs could potentially allow IC vehicles to meet the first wave of new fuel regulations while development of hybrid electric and fuel cell vehicles continues. Rather than selecting one of four or five gears, a CVT constantly changes its gear ratio to optimize engine efficiency with a perfectly smooth torque-speed curve. This improves both gas mileage and acceleration compared to traditional transmissions.
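The ratio-selection idea described above can be sketched simply: hold the engine at its most efficient speed and let the ratio absorb changes in road speed, subject to the transmission's physical ratio limits. All the numbers below are illustrative assumptions, not data for any particular vehicle.

```python
import math

def cvt_ratio(vehicle_speed_mps, best_rpm=2200.0, wheel_radius_m=0.3,
              ratio_min=0.5, ratio_max=4.0):
    """Engine-rpm / wheel-rpm ratio that holds the engine at its most
    efficient speed; all parameter values are illustrative assumptions."""
    wheel_rpm = vehicle_speed_mps / (2 * math.pi * wheel_radius_m) * 60.0
    ratio = best_rpm / max(wheel_rpm, 1e-9)        # avoid divide-by-zero
    return min(ratio_max, max(ratio_min, ratio))   # clamp to hardware limits
```

A production controller would also blend in load demand and a full engine efficiency map rather than a single target rpm, but the continuous, stepless adjustment shown here is what distinguishes a CVT from a four- or five-speed gearbox.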
The fundamental theory behind CVTs has undeniable potential, but lax fuel regulations and booming sales in recent years have given manufacturers a sense of complacency: if consumers are buying millions of cars with conventional transmissions, why spend billions to develop and manufacture CVTs?
Although CVTs have been used in automobiles for decades, limited torque capabilities and questionable reliability have inhibited their growth. Today, however, ongoing CVT research has led to ever-more robust transmissions, and thus ever-more-diverse automotive applications. As CVT development continues, manufacturing costs will be further reduced and performance will continue to increase, which will in turn increase the demand for further development. This cycle of improvement will ultimately give CVTs a solid foundation in the world’s automotive infrastructure.

CRYOGENIC ENGINES: CRYOGENICS - BIRTH OF AN ERA

The word cryogenics originates from two Greek words: "kryos", meaning cold or freezing, and "genes", meaning born or produced. Cryogenics is the study of very low temperatures and their production. Liquefied gases such as liquid nitrogen and liquid oxygen are used in many cryogenic applications. Liquid nitrogen is the most commonly used element in cryogenics and is legally purchasable around the world. Liquid helium is also commonly used and allows the lowest temperatures to be reached. These gases can be stored in large tanks called Dewar tanks, named after James Dewar, who first liquefied hydrogen, or in giant tanks used for commercial applications.

The field of cryogenics advanced during World War II, when metals frozen to low temperatures were found to show more wear resistance. In 1966 a company called CryoTech was formed, which experimented with using cryogenic tempering instead of heat treating to increase the life of metal tools. The theory built on that of heat treating, which brings temperatures down from high values to room temperature, by supposing that descending further would increase strength still more. Unfortunately for the newborn industry, the results were unstable, as components sometimes experienced thermal shock when cooled too quickly. With the use of applied research and the arrival of the modern computer, the field has since improved significantly, producing more stable results.
Another use of cryogenics is cryogenic fuels. Cryogenic fuels, mainly liquid oxygen and liquid hydrogen, have been used as rocket propellants. The Indian Space Research Organisation (ISRO) is set to flight-test the indigenously developed cryogenic engine by early 2006, after the engine passed a 1,000-second endurance test in 2003. It will form the final stage of the GSLV, for placing satellites into orbit 36,000 km from Earth.
Cryogenics is also used for making highly sensitive sensors that detect even the weakest signals reaching us from the stars. Most of these sensors, for example infrared sensors and X-ray spectrometers, must be cooled well below room temperature to achieve the necessary sensitivity. The High-resolution Airborne Wideband Camera for SOFIA (the Stratospheric Observatory for Infrared Astronomy, a Boeing 747 flying observatory and a project of the University of Chicago, Goddard Space Flight Center, and the Rochester Institute of Technology), which when it enters operation will be the largest infrared telescope available, is cooled by an adiabatic demagnetization refrigerator operating at a temperature of 0.2 K.
Another branch of cryogenics is cryonics, a field devoted to freezing people: those who die of presently incurable diseases are frozen in the hope that they can be revived once science learns both how to cure them and how to revive frozen people.

COMMON SYNTHETIC PLASTICS

INTRODUCTION
Plastic molecules are made of long chains of repeating units called monomers. The atoms that make up a plastic's monomers, and the arrangement of the monomers within the molecule, both determine many of the plastic's properties. Plastics are one classification of polymers. If a polymer is shaped into hard and tough utility articles by the application of heat and pressure, it is used as a "plastic".

Synthetic polymers are often referred to as "plastics", such as the well-known polyethylene and nylon. However, most of them can be classified in at least three main categories: thermoplastics, thermosets and elastomers.

Man-made polymers are used in a bewildering array of applications: food packaging, films, fibers, tubing, pipes, etc. The personal care industry also uses polymers to aid in texture of products, binding etc.

Examples
A non-exhaustive list of these ubiquitous materials includes:
acrylonitrile butadiene styrene (ABS)
polyamide (PA)
polybutadiene
poly(butylene terephthalate) (PBT)
polycarbonate
poly(ether sulphone) (PES, PES/PEES)
polyethylene (PE)
poly(ethylene glycol) (PEG)
poly(ethylene terephthalate) (PET)
polyimide
polypropylene (PP)
polystyrene (PS)
styrene acrylonitrile (SAN)
polyurethane (PU)
polyvinylchloride (PVC)