Saturday 17 August 2013

Nanosensors Could Aid Drug Manufacturing


Aug. 16, 2013 — MIT chemical engineers have discovered that arrays of billions of nanoscale sensors have unique properties that could help pharmaceutical companies produce drugs -- especially those based on antibodies -- more safely and efficiently.

A micrograph of the nanosensor array. The fluorescence of each carbon nanotube changes in intensity upon binding to a target molecule. (Credit: Image courtesy of the researchers)
Using these sensors, the researchers were able to characterize variations in the binding strength of antibody drugs, which hold promise for treating cancer and other diseases. They also used the sensors to monitor the structure of antibody molecules, including whether they contain a chain of sugars that interferes with proper function.
"This could help pharmaceutical companies figure out why certain drug formulations work better than others, and may help improve their effectiveness," says Michael Strano, an MIT professor of chemical engineering and senior author of a recent paper describing the sensors in the journal ACS Nano.
The team also demonstrated how nanosensor arrays could be used to determine which cells in a population of genetically engineered, drug-producing cells are the most productive or desirable, Strano says.
Lead author of the paper is Nigel Reuel, a graduate student in Strano's lab. The labs of MIT faculty members Krystyn Van Vliet, Christopher Love and Dane Wittrup also contributed, along with scientists from Novartis.
Testing drug strength
Strano and other scientists have previously shown that tiny, nanometer-sized sensors, such as carbon nanotubes, offer a powerful way to detect minute quantities of a substance. Carbon nanotubes are 50,000 times thinner than a human hair, and they can bind to proteins that recognize a specific target molecule. When the target is present, it alters the fluorescent signal produced by the nanotube in a way that scientists can detect.
Some researchers are trying to exploit large arrays of nanosensors, such as carbon nanotubes or semiconducting nanowires, each customized for a different target molecule, to detect many different targets at once. In the new study, Strano and his colleagues wanted to explore unique properties that emerge from large arrays of sensors that all detect the same thing.
The first feature they discovered, through mathematical modeling and experimentation, is that uniform arrays can measure the distribution in binding strength of complex proteins such as antibodies. Antibodies are naturally occurring molecules that play a key role in the body's ability to recognize and defend against foreign invaders. In recent years, scientists have been developing antibodies to treat disease, particularly cancer. When those antibodies bind to proteins found on cancer cells, they stimulate the body's own immune system to attack the tumor.
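To make that first property concrete, here is a minimal Monte Carlo sketch (illustrative numbers and distribution, not the paper's data) of why an array of single-molecule sensors can report a distribution of binding strengths where one bulk sensor reports only an average:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical antibody batch: per-molecule dissociation constants (nM)
# drawn from a lognormal distribution, a stand-in for batch heterogeneity.
n_sensors = 100_000
kd = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=n_sensors)

# Fractional occupancy of each single-molecule sensor at analyte
# concentration c, from the simple Langmuir binding isotherm.
c = 10.0  # nM
occupancy = c / (c + kd)

# A single large sensor would report only the batch average...
print(f"bulk-average occupancy: {occupancy.mean():.3f}")

# ...while the array keeps the full histogram, i.e. the distribution
# of binding strengths across the batch.
counts, edges = np.histogram(occupancy, bins=20, range=(0.0, 1.0))
print(counts)
```

A tight histogram would indicate a uniformly binding batch; a broad or bimodal one would flag exactly the kind of batch-to-batch variation the sensors are meant to catch.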
For antibody drugs to be effective, they must strongly bind their target. However, the manufacturing process, which relies on nonhuman, engineered cells, does not always generate consistent, uniformly binding batches of antibodies.
Currently, drug companies use time-consuming and expensive analytical processes to test each batch and make sure it meets the regulatory standards for effectiveness. However, the new MIT sensor could make this process much faster, allowing researchers to not only better monitor and control production, but also to fine-tune the manufacturing process to generate a more consistent product.
"You could use the technology to reject batches, but ideally you'd want to use it in your upstream process development to better define culture conditions, so then you wouldn't produce spurious lots," Reuel says.
Measuring weak interactions
Another useful trait of such sensors is their ability to measure very weak binding interactions, which could also help with antibody drug manufacturing.
Antibodies are usually coated with long sugar chains through a process called glycosylation. These sugar chains are necessary for the drugs to be effective, but they are extremely hard to detect because they interact so weakly with other molecules. Drug-manufacturing organisms that synthesize antibodies are also programmed to add sugar chains, but the process is difficult to control and is strongly influenced by the cells' environmental conditions, including temperature and acidity.
Without the appropriate glycosylation, antibodies delivered to a patient may elicit an unwanted immune response or be destroyed by the body's cells, making them useless.
"This has been a problem for pharmaceutical companies and researchers alike, trying to measure glycosylated proteins by recognizing the carbohydrate chain," Strano says. "What a nanosensor array can do is greatly expand the number of opportunities to detect rare binding events. You can measure what you would otherwise not be able to quantify with a single, larger sensor with the same sensitivity."
This tool could help researchers determine the optimal conditions for the correct degree of glycosylation to occur, making it easier to consistently produce effective drugs.
Mapping production
The third property the researchers discovered is the ability to map the production of a molecule of interest. "One of the things you would like to do is find strains of particular organisms that produce the therapeutic that you want," Strano says. "There are lots of ways of doing this, but none of them are easy."
The MIT team found that by growing the cells on a surface coated with an array of nanometer-sized sensors, they could detect the location of the most productive cells. In this study, they looked for an antibody produced by engineered human embryonic kidney cells, but the system could also be tailored to other proteins and organisms.
Once the most productive cells are identified, scientists look for genes that distinguish those cells from the less productive ones and engineer a new strain that is highly productive, Strano says.
The researchers have built a briefcase-sized prototype of their sensor that they plan to test with Novartis, which funded the research along with the National Science Foundation.
"Carbon nanotubes coupled to protein-binding entities are interesting for several areas of bio-manufacturing as they offer great potential for online monitoring of product levels and quality. Our collaboration has shown that carbon nanotube-based fluorescent sensors are applicable for such purposes, and I am eager to follow the maturation of this technology," says Ramon Wahl, an author of the paper and a principal scientist at Novartis.


Teleported by Electronic Circuit: Physicists 'Beam' Information


Aug. 14, 2013 — ETH researchers cannot yet "beam" objects or humans of flesh and blood through space, a feat sometimes depicted in science fiction movies. They have, however, managed to teleport information from A to B -- for the first time in an electronic circuit, similar to a computer chip.
A flash of light. (Credit: © iscatel / Fotolia)
Physicists at ETH Zurich have for the first time successfully teleported information in a so-called solid-state system. The researchers did it using a device similar to a conventional computer chip, with the essential difference that the information is stored and processed according to the laws of quantum physics rather than classical physics. In a study published in the current issue of the journal Nature, the researchers teleported information across a distance of about six millimeters, from one corner of a chip to the opposite one, without transporting the physical object carrying the information from the sender's corner to the receiver's.
"Usually, in telecommunication information is transmitted by electromagnetic pulses. In mobile communications, for example, microwave pulses are used, while in fiber connections it is optical pulses," explains Andreas Wallraff, Professor at the Department of Physics and head of the study. In contrast, quantum teleportation does not transport the information carrier itself, but only the information. This is possible due to the quantum mechanical properties of the system, in particular the entanglement established between the sender and the receiver. For non-physicists, entanglement constitutes a "magic" link between the two parties which exploits the laws of quantum physics.
"Like beaming"
As a prerequisite for quantum teleportation, an entangled state is created between the sender and the receiver. After that the two parties can be physically separated from each other while preserving their shared entangled state. In the present experiment the physicists program a bit of quantum information into their device at the sender. Because the two parties are entangled, this information can be read out at the receiver. "Quantum teleportation is comparable to beaming as shown in the science fiction series Star Trek," says Wallraff. "The information does not travel from point A to point B. Instead, it appears at point B and disappears at point A, when read out at point B."
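For readers who want the mechanism behind the analogy, the textbook qubit-teleportation protocol (the general scheme such experiments implement, not a description of the ETH chip's specific circuitry) runs as follows. Sender and receiver share an entangled Bell pair, the sender performs a joint Bell measurement on the input qubit and her half of the pair, and the two classical bits of the outcome tell the receiver which correction to apply:

```latex
\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad
|\Phi^{+}\rangle_{SR} = \tfrac{1}{\sqrt{2}}\,(|00\rangle + |11\rangle)
\]
\[
\text{Bell measurement at the sender} \;\longrightarrow\; \text{two classical bits } (i, j)
\;\longrightarrow\; \text{receiver applies } Z^{\,i} X^{\,j} \text{ to recover } |\psi\rangle
\]
```

Note that the two classical bits must still be communicated conventionally, which is why teleportation does not transmit information faster than light.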
High data rates
The distance of six millimeters over which the ETH Zurich researchers have teleported information may seem short compared with other teleportation experiments. A year ago, for example, Austrian scientists teleported information across more than one hundred kilometers between the Canary Islands of La Palma and Tenerife. That and other similar experiments, however, were fundamentally different from the one performed at ETH Zurich, because they used visible light in an optical system. The ETH researchers teleported information for the first time in a system consisting of superconducting electronic circuits. "This is interesting, because such circuits are an important element for the construction of future quantum computers," says Wallraff.
The ETH system has another advantage: it is extremely fast, much faster than most previous teleportation systems, teleporting approximately 10,000 quantum bits per second. A quantum bit is the basic unit of quantum information.
"Key future technology"
In a next step, the researchers plan to increase the distance between sender and receiver in their device; they will try to teleport information from one chip to another. In the long term, the goal is to explore whether quantum communication over longer distances can be realised with electronic circuits, comparable to what optical systems achieve today.
"Teleportation is an important future technology in the field of quantum information processing," says Wallraff. For example, it may be possible to transmit information from one location to another one in a future quantum device or processor. Compared to today's information and communication technologies, which are based on classical physics, quantum information processing has the advantage that the information density is much higher: In quantum bits more information can be stored and more efficiently processed than in classical bits.

Quantum Teleportation: Transfer of Flying Quantum Bits at the Touch of a Button


Aug. 15, 2013 — By means of the quantum-mechanical entanglement of spatially separated light fields, researchers in Tokyo and Mainz have managed to teleport photonic qubits with extreme reliability. This means that a decisive breakthrough has been achieved some 15 years after the first experiments in the field of optical teleportation. The success of the experiment conducted in Tokyo is attributable to the use of a hybrid technique in which two conceptually different and previously incompatible approaches were combined.
Deterministic quantum teleportation of a photonic quantum bit. Each qubit that flies from the left into the teleporter leaves the teleporter on the right with a loss of quality of only around 20 percent, a value not achievable without entanglement. (Credit: Ill./©: University of Tokyo)
"Discrete digital optical quantum information can now be transmitted continuously -- at the touch of a button, if you will," explained Professor Peter van Loock of Johannes Gutenberg University Mainz (JGU). As a theoretical physicist, van Loock advised the experimental physicists in the research team headed by Professor Akira Furusawa of the University of Tokyo on how they could most efficiently perform the teleportation experiment to ultimately verify the success of quantum teleportation. Their findings have now been published in the journal Nature.
Quantum teleportation involves the transfer of arbitrary quantum states from a sender, dubbed Alice, to a spatially distant receiver, named Bob. This requires that Alice and Bob initially share an entangled quantum state across the space in question, e.g., in the form of entangled photons. Quantum teleportation is of fundamental importance to the processing of quantum information (quantum computing) and quantum communication. Photons are especially valued as ideal information carriers for quantum communication since they can be used to transmit signals at the speed of light. A photon can represent a quantum bit or qubit analogous to a binary digit (bit) in standard classical information processing. Such photons are known as 'flying quantum bits'.
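The analogy with a classical bit is worth making precise (this is the standard definition, not something specific to this experiment): a qubit is not restricted to 0 or 1 but can occupy a superposition,

```latex
\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1,
\]
```

with complex amplitudes alpha and beta; for a photonic qubit, the states |0> and |1> can be encoded, for example, in two orthogonal polarization states.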
The first attempts to teleport single photons or light particles were made by the Austrian physicist Anton Zeilinger. Various other related experiments have been performed in the meantime. However, teleportation of photonic quantum bits using conventional methods proved to have its limitations because of experimental deficiencies and difficulties with fundamental principles.
What makes the experiment in Tokyo so different is the use of a hybrid technique. With its help, a completely deterministic and highly reliable quantum teleportation of photonic qubits has been achieved. The accuracy of the transfer was 79 to 82 percent for four different qubits. In addition, the qubits were teleported much more efficiently than in previous experiments, even at a low degree of entanglement.
Entanglement 'on demand' using squeezed light
The concept of entanglement was first formulated by Erwin Schrödinger and involves a situation in which two quantum systems, such as two light particles for example, are in a joint state, so that their behavior is mutually dependent to a greater extent than is normally (classically) possible. In the Tokyo experiment, continuous entanglement was achieved by means of entangling many photons with many other photons. This meant that the complete amplitudes and phases of two light fields were quantum correlated. Previous experiments only had a single photon entangled with another single photon -- a less efficient solution.
"The entanglement of photons functioned very well in the Tokyo experiment -- practically at the press of a button, as soon as the laser was switched on," said van Loock, Professor for Theory of Quantum Optics and Quantum Information at Mainz University. This continuous entanglement was accomplished with the aid of so-called 'squeezed light', which takes the form of an ellipse in the phase space of the light field. Once entanglement has been achieved, a third light field can be attached to the transmitter. From there, in principle, any state and any number of states can be transmitted to the receiver. "In our experiment, there were precisely four sufficiently representative test states that were transferred from Alice to Bob using entanglement. Thanks to continuous entanglement, it was possible to transmit the photonic qubits in a deterministic fashion to Bob, in other words, in each run," added van Loock.
Earlier attempts to achieve optical teleportation were performed differently and, before now, the concepts used have proved to be incompatible. Although in theory it had already been assumed that the two different strategies, from the discrete and the continuous world, needed to be combined, it represents a technological breakthrough that this has actually now been experimentally demonstrated with the help of the hybrid technique. "The two separate worlds, the discrete and the continuous, are starting to converge," concluded van Loock.

Small-molecule solar cells get 50% increase in efficiency with optical spacer


Aug 14, 2013
(Left) A ZnO optical spacer viewed under an atomic force microscope. (Center) The device structure of the small-molecule solar cell. (Right) Solar cell performance with and without an optical spacer. Credit: Aung Ko Ko Kyaw, et al. ©2013 American Chemical Society
In the world of organic solar cells, polymer-based devices may currently be at the top, but other organic materials such as "small molecules" also prove to be promising. Although small-molecule organic solar cells currently have lower efficiencies than polymer solar cells, they are generally easier to fabricate and their efficiencies are improving.
In a new study, researchers have shown that they can increase the efficiency of one type of small-molecule organic solar cell from 6.02% to 8.94% simply by tuning the thickness of the active layer and inserting an optical spacer between the active layer and an electrode. The efficiency improvement demonstrates that small-molecule solar cells have the potential to compete with their polymer counterparts, which have efficiencies approaching 10%.
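The headline figure of "50%" is simply the relative gain implied by those two efficiencies:

```latex
\[
\frac{8.94 - 6.02}{6.02} \approx 0.485 \approx 48.5\,\%
\]
```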
The researchers, led by Alan J. Heeger at the University of California at Santa Barbara, have published their paper on the efficiency improvement in small-molecule solar cells in a recent issue of Nano Letters.
As the scientists explain in their paper, small-molecule solar cells have several advantages over polymer solar cells: relatively simple synthesis, high charge-carrier mobility, similarly sized molecules (monodispersity), and better reproducibility, among others. However, small-molecule solar cells have so far achieved top efficiencies of about 8%, lagging somewhat behind the best polymer devices.
By demonstrating how a few simple changes can increase the efficiency of one type of small-molecule organic solar cell by nearly 50%, the scientists here have shown that these devices still have the potential for vast improvements.
Tuning the thickness of the active layer and inserting a ZnO optical spacer between the active layer and the electrode enable the active layer to harvest more light, increasing the photocurrent. The insertion of the optical spacer places the active layer in a more favorable position within the optical electric field inside the cell. As the scientists explained, the ZnO layer also improves device performance in three ways: increasing the charge-collection efficiency, serving as a blocking layer for holes, and reducing the recombination rate.
More information: Aung Ko Ko Kyaw, et al. "Improved Light Harvesting and Improved Efficiency by Insertion of an Optical Spacer (ZnO) in Solution-Processed Small-Molecule Solar Cells." Nano Letters. DOI: 10.1021/nl401758g

'Listening' to black holes form with gravity waves



Gravity waves are ripples in space generated by extreme cosmic events such as colliding stars, black holes, and supernova explosions, which carry vast amounts of energy at the speed of light. Credit: Werner Benger, NASAblueshift
New technology that breaks the quantum measurement barrier has been developed to detect the gravity waves first predicted by Einstein in 1916.
Professor David Blair was one of 800 physicists from around the world who announced a breakthrough in measurement science last month.
"Gravitational wave astronomy is going to be the new astronomy that's likely to really revolutionise our understanding of the universe," he says.
"It will allow us to listen to the big bang and to black holes forming throughout the universe.
"These are detectors that can allow humanity to explore the beginning of time and the end of time."
According to current theory, time began with the big bang and ends in black holes.
Specialised equipment known as the Laser Interferometer Gravitational-Wave Observatory (LIGO) uses laser interferometry to measure gravitational ripples in space and time.
The detector consists of an L-shaped vacuum system, four kilometres long, with mirrors at the ends.
Lasers directed at the mirrors are isolated from irrelevant vibrations by a vibration isolation system.
Professor Blair says the addition of a new technique called 'quantum squeezing' at the world's largest gravitational-wave detector allowed researchers to eliminate much of the 'noise' caused by quantum fluctuations of the laser light.
"The recent announcement is the first implementation in a multi-kilometre detector."
"It proves that the quantum barrier [that] physicists thought would limit sensitivity can be overcome."
The new equipment has allowed the physicists to break the quantum measurement barrier, defined until recently by Heisenberg's uncertainty principle.
"This is a major breakthrough that makes us even more confident that in a few years we will begin to directly measure the ripples in space," he says.
As a result there is no lower limit on the amount of measurable energy, and extremely subtle gravitational signals will become detectable.
"These instruments represent the pinnacle of technology," he says.
"They've got the most perfect mirrors ever created, they've got the most powerful laser light that's ever been used in any measuring system.
"They've got a vacuum that is so good that the size of any leak would represent less than a teaspoon full of air leaking into it in about 300 years.
"They can measure the smallest amounts of energy that has ever been measured but the new method enables them measure even less.
"The uncertainties from empty space can be suppressed so as to measure something even smaller."
David Blair led a team of 16 physicists in Western Australia along with 800 physicists from around the world who announced the breakthrough in a paper just published in the journal Nature Photonics.

Friday 16 August 2013

Raising the IQ of Smart Windows: Embedded Nanocrystals Provide Selective Control Over Visible Light and Heat-Producing Near-Infrared Light

Aug. 14, 2013 — Researchers at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) have designed a new material to make smart windows even smarter. The material is a thin coating of nanocrystals embedded in glass that can dynamically modify sunlight as it passes through a window. Unlike existing technologies, the coating provides selective control over visible light and heat-producing near-infrared (NIR) light, so windows can maximize both energy savings and occupant comfort in a wide range of climates.
"In the US, we spend about a quarter
of our total energy on lighting,
heating and cooling our buildings,"
says Delia Milliron, a chemist at
Berkeley Lab's Molecular Foundry
who led this research. "When used
as a window coating, our new
material can have a major impact on
building energy efficiency."
Milliron is corresponding author on a paper describing the results in the journal Nature. The paper is titled "Tunable near-infrared and visible light transmittance in nanocrystal-in-glass composites," co-authored by Anna Llordés, Guillermo Garcia, and Jaume Gazquez.
Milliron's research group is already well known for its smart-window technology that blocks NIR without blocking visible light. The technology hinges on an electrochromic effect, where a small jolt of electricity switches the material between NIR-transmitting and NIR-blocking states. This new work takes their approach to the next level by providing independent control over both visible and NIR light. The innovation was recently recognized with a 2013 R&D 100 Award, and the researchers are in the early stages of commercializing their technology.
Independent control over NIR light means that occupants can have natural lighting indoors without unwanted thermal gain, reducing the need for both air-conditioning and artificial lighting. The same window can also be switched to a dark mode, blocking both light and heat, or to a bright, fully transparent mode.
"We're very excited about the
combination of unique optical
function with the low-cost and
environmentally friendly processing
technique," said Llordés, a project
scientist working with Milliron.
"That's what turns this 'universal
smart window' concept into a
promising competitive technology."
At the heart of their technology is a new "designer" electrochromic material, made from nanocrystals of indium tin oxide embedded in a glassy matrix of niobium oxide. The resulting composite material combines two distinct functionalities -- one providing control over visible light and the other, control over NIR -- but it is more than the sum of its parts. The researchers found a synergistic interaction in the tiny region where glassy matrix meets nanocrystal that increases the potency of the electrochromic effect, which means they can use thinner coatings without compromising performance. The key is that the way atoms connect across the nanocrystal-glass interface causes a structural rearrangement in the glass matrix.
The interaction opens up space inside the glass, allowing charge to move in and out more readily. Beyond electrochromic windows, this discovery suggests new opportunities for battery materials, where transport of ions through electrodes can be a challenge.
"From a materials-design
perspective, we've shown that you
can combine very dissimilar materials
to create new properties that are not
accessible in a homogeneous single
phase material, either amorphous or
crystalline, by taking nanocrystals
and putting them in glass," says
Milliron.
But for Milliron, the research journey is even more satisfying than either basic scientific discovery or technological advances alone.

"The most exciting part has been taking this project all the way from synthesizing a new material, to understanding it in great detail, and finally to realizing a completely new functionality that can have a big impact on technology," says Milliron. "Taking a materials development project all the way through that process is really quite remarkable. It really speaks to what we can do at Berkeley Lab, where you have access to not just the scientific facilities but also to people who can inform your perspective."

New research suggests perovskite as cheaper replacement for silicon-based solar panels

(Phys.org) — Researchers at Oxford Photovoltaics and other companies investigating the use of perovskite—a crystalline organometal—as a replacement for silicon in photovoltaic cells have created prototypes that are approximately 15 percent efficient. But this is apparently just the beginning. Kevin Bullis suggests in an article published this week in MIT Technology Review that researchers are predicting efficiencies as high as 25 percent very soon, putting the material on a par with silicon.
Simply meeting the same efficiency levels as silicon isn't a big deal, of course; other materials have been found that are capable of doing so as well. What's newsworthy here is that using perovskite to make solar cells would be far cheaper. Not only is the material more readily available, it also costs less to process. Cells that use it would also require far less material: silicon cells typically wind up approximately 180 micrometers thick, while a comparable cell made using perovskite would be just 1 micrometer thick.
Perovskite isn't some newly discovered material—scientists have known about it for over 170 years. What's new is that researchers are only now beginning to fully realize its potential as a material for use in solar cell technology. It was only in 2009 that researchers first thought of using the semiconductor in such cells—initial testing indicated that it was only 3.5 percent efficient. Worse, it didn't hold up for very long when used. But since that time, researchers have figured out how to make it last longer and have continuously improved its efficiency to boot.
Current prototypes are made using a process that involves spraying the material onto a base, which means the material is far more versatile than silicon as well. But what really has researchers excited are the prospects for creating solar panels far more cheaply than can be done today. Estimates suggest perovskite panels could cost just 10 to 20 cents per watt, compared with 75 cents per watt for traditional silicon-based panels and an average of 50 cents per watt for fossil fuels, suggesting that perovskite could drive a dramatic shift to solar power if its efficiency can be improved as researchers hope.
More information: Oxford Photovoltaics: http://www.oxfordpv.com/photovoltaic-cell-technology.html

Thursday 15 August 2013

Computer Chip Based On Human Brain Developed

Aug. 14, 2013 — Today's computing chips are incredibly complex and contain billions of nano-scale transistors, allowing for fast, high-performance computers, pocket-sized smartphones that far outpace early desktop computers, and an explosion in handheld tablets.
Despite their ability to perform thousands of tasks in the blink of an eye, none of these devices even come close to rivaling the computing capabilities of the human brain. At least not yet. But a Boise State University research team could soon change that.
Electrical and computer engineering faculty Elisa Barney Smith, Kris Campbell and Vishal Saxena are joining forces on a project titled "CIF: Small: Realizing Chip-scale Bio-inspired Spiking Neural Networks with Monolithically Integrated Nano-scale Memristors."
Team members are experts in machine learning (artificial intelligence), integrated circuit design and memristor devices. Funded by a three-year, $500,000 National Science Foundation grant, they have taken on the challenge of developing a new kind of computing architecture that works more like a brain than a traditional digital computer.
"By mimicking the brain's billions of
interconnections and pattern
recognition capabilities, we may
ultimately introduce a new paradigm
in speed and power, and potentially
enable systems that include the
ability to learn, adapt and respond to
their environment," said Barney
Smith, who is the principal
investigator on the grant.
The project's success rests on a memristor -- a resistor that can be programmed to a new resistance by the application of electrical pulses and that remembers its new resistance value once the power is removed. Memristors were first hypothesized to exist in 1971 (in conjunction with resistors, capacitors and inductors) but were fully realized as nano-scale devices only in the last decade.
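As a concrete sketch of that "remembers its resistance" behavior, here is the widely cited linear ionic-drift model (following Strukov et al., 2008; a generic illustration with made-up parameters, not the Boise State team's device physics):

```python
import numpy as np

def simulate_memristor(voltage, dt=1e-4, r_on=100.0, r_off=16e3,
                       d=10e-9, mu_v=1e-14, w0=0.5):
    """Linear ionic-drift memristor model (after Strukov et al., 2008).

    voltage : applied voltage at each time step
    w0      : initial normalized width of the doped, low-resistance region
    Returns the current and resistance traces.
    """
    w = w0  # normalized state variable, clamped to [0, 1]
    current, resistance = [], []
    for v in voltage:
        r = r_on * w + r_off * (1.0 - w)  # two regions in series
        i = v / r
        # The state drifts with the charge that has flowed through the
        # device; this accumulated drift is what gives the element memory.
        w += (mu_v * r_on / d**2) * i * dt
        w = min(max(w, 0.0), 1.0)
        current.append(i)
        resistance.append(r)
    return np.array(current), np.array(resistance)

# A positive half-sine voltage pulse: the device ends in a different
# resistance state than it started in, and stays there when unpowered.
t = np.linspace(0.0, np.pi, 10_000)
_, r = simulate_memristor(np.sin(t))
print(f"resistance moved from {r[0]:.0f} ohm to {r[-1]:.0f} ohm")
```

Driving such a model with a full AC cycle traces out the pinched hysteresis loop that is commonly described as the memristor's signature.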
One of the first memristors was built in Campbell's Boise State lab, which has the distinction of being one of only five or six labs worldwide that are up to the task.
The team's research builds on recent work from scientists who have derived mathematical algorithms to explain the electrical interaction between brain synapses and neurons.
"By employing these models in
combination with a new device
technology that exhibits similar
electrical response to the neural
synapses, we will design entirely new
computing chips that mimic how the
brain processes information," said
Barney Smith.
Even better, these new chips will consume an order of magnitude less power than current computing processors, despite matching existing chips in physical dimensions. This will open the door for ultra-low-power electronics intended for applications with scarce energy resources, such as in space, environmental sensors or biomedical implants.
Once the team has successfully built an artificial neural network, they will look to engage neurobiologists to work in parallel with their own efforts. A proposal for that collaboration could be written in the coming year.
Barney Smith said they hope to send the first of the new neuron chips out for fabrication within weeks.
This material is based upon work supported by the National Science Foundation under Grant No. CCF-1320987 to Boise State University. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Wednesday 14 August 2013

Happy Independence day

Happy Independence Day to all beloved Indians. The first newspaper report after Independence, on 15th Aug 1947. Jai Hind!

Mysterious magnetar boasts one of strongest magnetic fields in Universe

Artist's impression of a magnetar. Credit: ESA/ATG Medialab
(Phys.org) — A team of astronomers including two researchers from UCL's Mullard Space Science Laboratory has made the first ever measurement of the magnetic field at a specific spot on the surface of a magnetar. Magnetars are a type of neutron star, the dense and compact core of a giant star which has blasted away its outer layers in a supernova explosion.
Magnetars have among the strongest magnetic fields in the Universe. Until now, only their large-scale magnetic field had been measured. However, using a new technique and observations of a magnetar in X-rays, the astronomers have now revealed a strong, localised surface magnetic field on one.
Magnetars are very puzzling neutron stars. Astronomers discovered them through their unusual behaviour when observed in X-ray wavelengths, including sudden outbursts of radiation and occasional giant flares. These peculiar features of magnetars are caused by the evolution, dissipation and decay of their super-strong magnetic fields, which are hundreds or thousands of times more intense than those of the more common type of neutron stars, the radio pulsars.
The magnetic field of a magnetar can have a complex structure. The most obvious, and easy-to-measure, component is the large-scale external magnetic field, which is shaped (and behaves) much like a regular bar magnet's. This is known as the dipolar field.
The study was carried out on a magnetar called SGR 0418+5729. A few years ago, this star was discovered to have a relatively gentle dipolar magnetic field compared to other magnetars. However, the star was showing the typical flaring and bursting activity seen in other magnetars, leading scientists to suggest that the star's magnetic activity might be caused by a field hidden beneath its surface.
Sometimes, the surface breaks and the hidden magnetic field leaks out (artist's impression). Credit: ESA/ATG Medialab
This new study, based on observations from ESA's XMM-Newton X-ray space telescope, has finally found evidence that SGR 0418+5729 is indeed concealing a very strong magnetic field in its interior.
"This magnetar has a strong magnetic
field inside it, but it is hidden
beneath the surface. The only way
you can detect that is to find a flaw
on the surface, where the concealed
magnetic field can leak out," says
Silvia Zane (UCL Mullard Space
Science Laboratory), one of the co-
authors of the study.
Such magnetic leaks would also explain the outbursts and flares of radiation observed from magnetars. The warped magnetic field trapped inside the star builds up stress below the magnetar's surface, occasionally breaking its 'crust' apart and releasing sudden flashes of X-rays.
Magnetars are far too small – only around 20 km across – and too distant for even the best telescopes to see any details on their surfaces. They appear just as dots of light in astronomers' observations. So the team had to look for indirect signs of variation on SGR 0418+5729's surface. To do this, they measured how the magnetar's X-ray emissions varied as the star rotated.
"SGR 0418+5729 rotates once every 9
seconds. We found that at one point
during the rotation, the magnetar's
X-ray brightness drops sharply. That
means something on or near one
part of the surface is absorbing the
radiation," adds Roberto Turolla (an
honorary professor at MSSL and co-
author of the study).
Closeup of the magnetic field leaking out of a magnetar (artist's impression). Credit: ESA/ATG Medialab
The team believes that a concentration of protons over a small area of the magnetar's surface – perhaps as little as a few hundred metres across – is absorbing the X-rays. The protons are confined to a small volume near the surface by a strong, localised magnetic field emerging from the magnetar's interior, giving powerful evidence that a strong and twisted internal magnetic field lurks beneath the surface.
"This exciting discovery also confirms
that, in principle, other pulsars with
relatively low external magnetic fields
might conceal a similar strong
magnetic field in the interior. As a
result, many pulsars may switch on
and become active flaring magnetars
for a while, so in the future we may
discover much more magnetars than
what we previously thought. This call
for a major revision of our current
ideas of magnetic field formation and
amplification in neutron stars,"
explains Zane.
The study is published in the journal Nature.
More information: "A variable
absorption feature in the X-ray
spectrum of a magnetar," by A.
Tiengo et al is published in Nature ,
15 August 2013.
Provided by University College
London

Monday 12 August 2013

Researchers discover a tiny twist in bilayer graphene that may solve a mystery

The Dirac spectrum of bilayer graphene when the two layers are exactly aligned (left) shifts with a slight interlayer twist that breaks interlayer-coupling and potential symmetry, leading to a new spectrum with surprisingly strong signatures in ARPES data. Credit: Keun Su Kim, Fritz Haber Institute
Researchers with the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab) have discovered a unique new twist to the story of graphene, sheets of pure carbon just one atom thick, and in the process appear to have solved a mystery that has held back device development.
Electrons can race through graphene at nearly the speed of light – 100 times faster than they move through silicon. In addition to being superthin and superfast when it comes to conducting electrons, graphene is also superstrong and superflexible, making it a potential superstar material in the electronics and photonics fields, the basis for a host of devices, starting with ultrafast transistors. One big problem, however, has been that graphene's electron conduction can't be completely stopped, an essential requirement for on/off devices.
The on/off problem stems from monolayers of graphene having no bandgaps – ranges of energy in which no electron states can exist. Without a bandgap, there is no way to control or modulate electron current and therefore no way to fully realize the enormous promise of graphene in electronic and photonic devices.

Berkeley Lab researchers have been able to engineer precisely controlled bandgaps in bilayer graphene through the application of an external electric field. However, when devices were made with these engineered bandgaps, the devices behaved strangely, as if conduction in those bandgaps had not been stopped. Why such devices did not pan out has been a scientific mystery until now.
Working at Berkeley Lab's Advanced Light Source (ALS), a DOE national user facility, a research team led by ALS scientist Aaron Bostwick has discovered that in the stacking of graphene monolayers subtle misalignments arise, creating an almost imperceptible twist in the final bilayer graphene. Tiny as it is – as small as 0.1 degree – this twist can lead to surprisingly strong changes in the bilayer graphene's electronic properties.
"The introduction of the twist
generates a completely new
electronic structure in the bilayer
graphene that produces massive and
massless Dirac fermions," says
Bostwick. "The massless Dirac
fermion branch produced by this new
structure prevents bilayer graphene
from becoming fully insulating even
under a very strong electric field. This
explains why bilayer graphene has
not lived up to theoretical predictions
in actual devices that were based on
perfect or untwisted bilayer
graphene."
Bostwick is the corresponding author of a paper describing this research in the journal Nature Materials, titled "Coexisting massive and massless Dirac fermions in symmetry-broken bilayer graphene." Keun Su Kim of the Fritz Haber Institute in Berlin is the lead author. Other coauthors are Andrew Walter, Luca Moreschini, Thomas Seyller, Karsten Horn, and Eli Rotenberg, who oversees the research at ALS Beamline 7.0.1.
To get to the bottom of this mystery, Rotenberg, Bostwick, Kim and their co-authors performed a series of angle-resolved photoemission spectroscopy (ARPES) experiments at ALS beamline 7.0.1. ARPES is a technique for studying the electronic states of a solid material in which a beam of X-ray photons striking the material's surface causes the photoemission of electrons. The kinetic energy of these photoelectrons and the angles at which they are ejected are then measured to obtain an electronic spectrum.
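The quantities ARPES measures map onto the electronic structure through two standard relations (textbook kinematics, not anything specific to this beamline): energy conservation fixes the electron's binding energy, and momentum conservation parallel to the surface fixes its crystal momentum,

```latex
\[
E_{\mathrm{kin}} = h\nu - \phi - |E_{B}|, \qquad
k_{\parallel} = \frac{\sqrt{2 m_{e} E_{\mathrm{kin}}}}{\hbar}\,\sin\theta,
\]
```

where h-nu is the photon energy, phi the work function, E_B the binding energy, and theta the emission angle. Scanning energy and angle therefore yields the band structure, here revealing the extra Dirac branches.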
"The combination of ARPES and
Beamline 7.0.1 enabled us to easily
identify the electronic spectrum from
the twist in the bilayer graphene,"
says Rotenberg. "The spectrum we
observed was very different from
what has been assumed and contains
extra branches consisting of massless
Dirac fermions. These new massless
Dirac fermions move in a completely
unexpected way governed by the
symmetry twisted layers."
Massless Dirac fermions, electrons that essentially behave as if they were photons, are not subject to the same bandgap constraints as conventional electrons. In their Nature Materials paper, the authors state that the twists that generate this massless Dirac fermion spectrum may be nearly inevitable in the making of bilayer graphene and can be introduced as a result of only ten atomic misfits in a square micron of bilayer graphene.
"Now that we understand the
problem, we can search for
solutions," says lead author Kim.
"For example, we can try to develop
fabrication techniques that minimize
the twist effects, or reduce the size
of the bilayer graphene we make so
that we have a better chance of
producing locally pure material."
Beyond solving a bilayer graphene mystery, Kim and his colleagues say the discovery of the twist establishes a new framework on which various fundamental properties of bilayer graphene can be more accurately predicted.
"A lesson learned here is that even
such a tiny structural distortion of
atomic-scale materials should not be
dismissed in describing the electronic
properties of these materials fully
and accurately," Kim says.
Provided by Lawrence Berkeley
National Laboratory

Electrical Signatures of Consciousness in the Dying Brain

Aug. 12, 2013 — The "near-death experience" reported by cardiac arrest survivors worldwide may be grounded in science, according to research at the University of Michigan Health System.
Whether and how the dying brain is capable of generating conscious activity has been vigorously debated. But in this week's PNAS Early Edition, a U-M study shows that shortly after clinical death, in which the heart stops beating and blood stops flowing to the brain, rats display brain activity patterns characteristic of conscious perception.
"This study, performed in animals, is
the first dealing with what happens
to the neurophysiological state of the
dying brain," says lead study author
Jimo Borjigin, Ph.D., associate
professor of molecular and
integrative physiology and associate
professor of neurology at the
University of Michigan Medical
School.
"It will form the foundation for
future human studies investigating
mental experiences occurring in the
dying brain, including seeing light
during cardiac arrest," she says.
Approximately 20 percent of cardiac arrest survivors report having had a near-death experience during clinical death. These visions and perceptions have been called "realer than real," according to previous research, but it remains unclear whether the brain is capable of such activity after cardiac arrest.
"We reasoned that if near-death
experience stems from brain activity,
neural correlates of consciousness
should be identifiable in humans or
animals even after the cessation of
cerebral blood flow," she says.
Researchers analyzed the recordings of brain activity, called electroencephalograms (EEGs), from nine anesthetized rats undergoing experimentally induced cardiac arrest.
Within the first 30 seconds after cardiac arrest, all of the rats displayed a widespread, transient surge of highly synchronized brain activity that had features associated with a highly aroused brain. Furthermore, the authors observed nearly identical patterns in the dying brains of rats undergoing asphyxiation.
"The prediction that we would find
some signs of conscious activity in
the brain during cardiac arrest was
confirmed with the data," says
Borjigin, who conceived the idea for
the project in 2007 with study co-
author neurologist Michael M. Wang,
M.D., Ph.D., associate professor of
neurology and associate professor of
molecular and integrative physiology
at the U-M.
"But, we were surprised by the high
levels of activity," adds study senior
author anesthesiologist George
Mashour, M.D., Ph.D., assistant
professor of anesthesiology and
neurosurgery at the U-M. " In fact, at
near-death, many known electrical
signatures of consciousness exceeded
levels found in the waking state,
suggesting that the brain is capable
of well-organized electrical activity
during the early stage of clinical
death."
The brain is assumed to be inactive during cardiac arrest. However, the neurophysiological state of the brain immediately following cardiac arrest had not been systematically investigated until now.
The current study resulted from collaboration between the labs of Borjigin and Mashour, with U-M physicist UnCheol Lee, Ph.D., playing a critical role in the analysis.
"This study tells us that reduction of
oxygen or both oxygen and glucose
during cardiac arrest can stimulate
brain activity that is characteristic of
conscious processing," says Borjigin.
"It also provides the first scientific
framework for the near-death
experiences reported by many cardiac
arrest survivors."

Sunday 11 August 2013

DNA nanorobots find and tag cellular targets


Aug 07, 2013
This graphic shows a molecular robot (automaton) in action. To tag cells (grey circle) that display the Mi, Mj, and Mk receptors, five different components of a molecular robot are deployed. Each of the first three components consists of DNA and an antibody; one antibody binds to each receptor, bringing its DNA (represented by the colored lines) close together on the cell. The fourth DNA component, represented by the single red line, then initiates a chain reaction by pulling the red DNA strand away from the first antibody. That causes the blue DNA strand to change position, followed by the green DNA strand. In the final step, the last antibody pulls a fluorescent DNA strand (labeled F) from the fifth component, completing the action of the robot. Credit: Milan Stojanovic, Ph.D./Columbia University Medical Center

Researchers at Columbia University Medical Center, working with their collaborators at the Hospital for Special Surgery, have created a fleet of molecular "robots" that can home in on specific human cells and mark them for drug therapy or destruction.
The robots—a collection of DNA molecules, some attached to antibodies—were designed to seek a specific set of human blood cells and attach a fluorescent tag to the cell surfaces. Details of the system were published July 28, 2013, in the online edition of Nature Nanotechnology.
"This opens up the possibility of using such molecules to target, treat, or kill specific cells without affecting similar healthy cells," said the study's senior investigator, Milan Stojanovic, PhD, associate professor of medicine and of  at Columbia University Medical Center. "In our experiment, we tagged the cells with a fluorescent marker; but we could replace that with a drug or with a  to kill the cell."
Though other DNA nanorobots have been designed to deliver drugs to cells, the advantage of Stojanovic's fleet is its ability to distinguish cells that do not share a single distinctive feature.
Cells, including cancer cells, rarely possess a single, exclusive feature that sets them apart from all other cells. This makes it difficult to design drugs without side effects. Drugs can be designed to target cells with a specific receptor, but healthy cells with the same receptor will also be targeted.
The only way to target cells more precisely is to identify them based on a collection of features. "If we look for the presence of five, six, or more proteins on the cell surface, we can be more selective," Dr. Stojanovic said. Large cell-sorting machines have the ability to identify cells based on multiple proteins, but until now, molecule-scale devices have not had that capability.
Instead of building a single complex molecule to identify multiple features of a cell, Dr. Stojanovic and his colleagues at Columbia used a different, and potentially easier, approach based on multiple simple molecules, which together form a robot (or automaton, as the authors prefer to call it).
To identify a cell possessing three specific surface proteins, Dr. Stojanovic first constructed three different components for molecular robots. Each component consisted of a piece of double-stranded DNA attached to an antibody specific to one of the surface proteins. When these components are added to a collection of cells, the antibody portions of the robot bind to their respective proteins (in the figure, CD45, CD3, and CD8) and work in concert.
On cells where all three components are attached, the robot is functional, and a fourth component initiates a chain reaction among the DNA strands. Each component swaps a strand of DNA with another until, at the end of the cascade, the last antibody obtains a strand of DNA that is fluorescently labeled.
At the end of the chain reaction—which takes less than 15 minutes in a sample of human blood—only cells with the three surface proteins are labeled with the fluorescent marker.
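In logical terms, the automaton computes an AND over surface markers: the strand-displacement cascade can run to completion only on a cell where all of the required antibody components are bound. A toy sketch of that selectivity (the cell mix below is invented for illustration; only the three marker names come from the article):

```python
# Toy model of the automaton's AND logic: a cell receives the fluorescent
# strand only if the displacement cascade finds every component in place.
REQUIRED = {"CD45", "CD3", "CD8"}  # surface proteins targeted in the study

cells = [
    {"id": 1, "proteins": {"CD45", "CD3", "CD8"}},   # all three: target
    {"id": 2, "proteins": {"CD45", "CD3"}},          # missing CD8
    {"id": 3, "proteins": {"CD45", "CD8", "CD19"}},  # missing CD3
]

for cell in cells:
    labeled = REQUIRED <= cell["proteins"]  # cascade completes only on AND
    print(f"cell {cell['id']}: {'labeled' if labeled else 'unlabeled'}")
```

Only cell 1 ends up fluorescent, mirroring how cells lacking any one of the three proteins are left untagged.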
"We have demonstrated our concept with blood cells because their surface proteins are well known, but in principle our molecules could be deployed anywhere in the body," Dr. Stojanovic said. In addition, the system can be expanded to identify four, five, or even more surface proteins.
Now the researchers must show that their molecular robots work in a living animal; the next step will be experiments in mice.

Molecules form 2-D patterns never before observed: Experiments produce elusive 5-vertex tilings


Aug 08, 2013, phys.org
The 2-D tessellation pattern known as the "semiregular snub square tiling" stands out clearly in this image, which combines scanning tunneling microscopy with computer graphics. The pattern, observed in a surface architecture just one molecule thick, was formed by self-assembly of linear organic linkers, imaged as rods, and lanthanide cerium centers, visualized as bright protrusions. The area shown measures less than 25 nanometers across. Credit: Barth Lab, copyright TUM
Tessellation patterns that have fascinated mathematicians since Johannes Kepler worked out their systematics 400 years ago – and that more recently have caught the eye of both artists and crystallographers – can now be seen in the laboratory. They first took shape on a surface more perfectly two-dimensional than any sheet of writing paper, a single layer of atoms and molecules atop an atomically smooth substrate. Physicists coaxed these so-called Kepler tilings "onto the page" through guided self-assembly of nanostructures.

The experiments were carried out by postdoctoral researcher David Ecija, PhD candidate Jose Ignacio Urgel and colleagues in the Physics Department of Technische Universitaet Muenchen (TUM), in collaboration with scientists in Karlsruhe and Zurich. They reported their findings in the Proceedings of the National Academy of Sciences.
Results open a new line of research
Organic molecules equipped with functional groups to express distinct linkages to metal centers were deposited onto a smooth silver substrate under ultra-high vacuum. Subsequently the organic layer on this platform was exposed to an atomic flux of the lanthanide cerium. At a certain ratio of cerium atoms to molecules, self-assembly produced a symmetrical complex 2-D pattern described originally by Kepler and known today as the snub square tiling. Clearly identifiable through scanning tunneling microscopy was a recurring, five-vertex connecting element less than one nanometer across, a cerium-ligand coordination unit.
That the snub square tiling pattern had never been fabricated and seen at the molecular level by exploiting self-assembly protocols was interesting in itself. Beyond that, the physicists explain, every new surface architecture could potentially open the way to novel physics and chemistry, and until now five-vertex structures have proven elusive. In particular, the fact that the lanthanide element cerium played such a key role marks this as the beginning of a new line of research.
This is the first time the TUM researchers – members of Prof. Johannes Barth's Institute for Molecular Nanoscience and Chemical Physics of Interfaces – have coordinated molecules with a lanthanide, and the first time anyone has done this in 2-D. "And lanthanides are special," David Ecija explains. "They have very intriguing optical, magnetic, and chemical properties that could be interesting for nanoscience, and possibly also for nanotechnology. Now we have a new playground for research with the lanthanides, and beyond."

More information: Five-vertex Archimedean surface tessellation by lanthanide-directed molecular self-assembly. David Ecija, Jose I. Urgel, Anthoula C. Papageorgiou, Sushobhan Joshi, Willi Auwaerter, Ari P. Seitsonen, Svetlana Klyatskaya, Mario Ruben, Sybille Fischer, Saranyan Vijayaraghavan, Joachim Reichert, and Johannes V. Barth. PNAS 2013 Vol. 110 No. 17, pp. 6678-6681. DOI: 10.1073/pnas.1222713110

Raman pixel by pixel


New data processing protocol enables feature-based recognition of Surface-enhanced Raman spectra for intracellular molecule probing of biological targets. It relies on locally detecting the most relevant spectra to retrieve all data independently through indexing.
Raman spectroscopy provides molecular specificity through spectrally-resolved measurement of the inelastic scattering of monochromatic light. In the context of microscopy, it may serve as a label-free contrast mechanism, providing structural information. However, the very low cross-section of Raman scattering requires long exposure times, which preclude imaging of molecules at low concentrations. Surface-enhanced Raman spectroscopy (SERS), which relies on the local field enhancement produced by metallic nanostructures, is an approach to drastically increase the sensitivity of Raman detection while retaining large amounts of spectral information. In cells, the measurement is usually performed on endocytosed nanostructures. However, the measured SERS signals vary strongly, as they depend on the excitation beam profile, local particle presence or aggregation, and the local molecular environment. Identifying and extracting spectra corresponding to molecules of interest within a SERS data set is therefore very difficult.
Conventional data analysis methods look for global patterns in the data, whereas the single-molecule sensitivity of SERS can detect independent molecules in each pixel with little correlation between pixels. Nicolas Pavillon and his colleagues from Osaka University now explored different algorithmic methods to automatically discriminate spectra of interest in the measured field of view, without imposing assumptions on the self-similarity of the data. The proposed method relies on the indexing of the positions of relevant spectra, which are selected by the computation of a quality map.
The scientists proposed various criteria for selecting spectra, such as the spectral energy, the peak count per spectrum, or the projection coefficients on SVD vectors. They assessed each criterion with simulated data and applied the approach to different types of measurements, such as dried Rhodamine 6G adsorbed on gold nanoparticles deposited on a glass substrate, and HeLa cells with endocytosed gold nanoparticles.
The tests with simulated data showed that various criteria can provide satisfactory results. The computation time could be tremendously decreased by discarding irrelevant pixels through a simple criterion based on the spectral energy, reducing the processing time to typically less than 10 seconds for a field of view on the order of 100 X 100 pixels.
The tests performed on Rhodamine 6G measurements demonstrated the validity of the proposed approach, where the dye's known spectrum could be extracted automatically. The peak count criterion was the most suitable for most cases, as it detects various patterns without filtering out a curve that may appear only once in the data set. Such single spectra may be critically important in a given SERS detection experiment. One main feature of the proposed approach is that its output is a localization map of the most relevant spectra in a measurement. The spatial information is retained, making it possible to trace back the positions of several nanoparticles with identical properties, for instance. The optimized method was used to extract and classify the complex SERS response behavior of gold nanoparticles taken up by live cells.
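A minimal sketch of the indexing idea, assuming a hyperspectral cube of shape (ny, nx, n_bands); the energy threshold and peak-count test below are simplified stand-ins for the paper's quality-map criteria, and all parameters are illustrative:

```python
import numpy as np
from scipy.signal import find_peaks

def index_relevant_spectra(cube, energy_quantile=0.9, min_peaks=3,
                           prominence=5.0):
    """Return (row, col) positions of pixels whose spectra pass the tests.

    cube : SERS hyperspectral data, shape (ny, nx, n_bands), noise std ~1
    """
    ny, nx, _ = cube.shape
    flat = cube.reshape(-1, cube.shape[-1])

    # Cheap first pass: discard low-energy pixels to cut computation time.
    energy = (flat ** 2).sum(axis=1)
    candidates = np.flatnonzero(energy > np.quantile(energy, energy_quantile))

    # Second pass: keep candidate spectra with enough distinct peaks.
    selected = []
    for idx in candidates:
        peaks, _ = find_peaks(flat[idx], prominence=prominence)
        if len(peaks) >= min_peaks:
            selected.append(divmod(idx, nx))  # flat index -> (row, col)
    return selected

# Synthetic example: unit-variance noise plus one pixel carrying a
# three-peak spectrum, which the quality map should localize.
rng = np.random.default_rng(1)
cube = rng.normal(0.0, 1.0, size=(100, 100, 600))
bands = np.arange(600)
for center in (150, 300, 450):
    cube[40, 60] += 60.0 * np.exp(-0.5 * ((bands - center) / 4.0) ** 2)
print(index_relevant_spectra(cube))  # expected: [(40, 60)]
```

The two-stage structure mirrors the reported speed-up: the cheap energy criterion throws away most pixels, and the costlier peak analysis runs only on the survivors.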

More information: N. Pavillon, K. Bando, K. Fujita, N. I. Smith, Feature-based recognition of Surface-enhanced Raman spectra for biological targets, J. Biophotonics 6(8),587-597 (2013); dx.doi.org/10.1002/jbio.201200181



INS Arihant reactor goes critical, submarine to start sea trials



INS Arihant
India's first indigenously constructed nuclear-powered submarine achieved a key milestone when its atomic reactor was switched on in Visakhapatnam last night. Top naval sources confirmed that the Arihant's reactor had "gone critical". With this, a key milestone in the submarine's 'harbour acceptance trials' has been completed. "We have waited a very long time for this," a senior naval official said.

A nuclear submarine armed with nuclear-tipped ballistic missiles is the third leg of the 'triad' of land, air and sea launched nuclear weapons envisaged by India's nuclear doctrine of 1998. The navy wants the submarine ready for deterrent patrol, or when it can sail out with nuclear weapons, by 2014.

The next key milestone will be when the 6000-tonne Arihant begins its sea trials and its commanding officer, Captain Sanjay Mahendru, signals "underway on nuclear power", meaning that the submarine is sailing under its own power. This is likely to happen as early as next month, when the monsoons subside.

Nuclear submarines use miniaturised atomic reactors to generate tremendous heat that boils water into saturated steam. This steam drives the submarine's propulsion and generates electricity. The plant is near-identical to a conventional steam-turbine plant, except that it is powered by nuclear energy.

Since its launch in Vizag on 26 July 2009, the Arihant has spent over four years in harbour acceptance trials. Steam generated in the dockyard was pumped into the submarine to test its major machinery and control systems. With the 80-megawatt nuclear reactor now switched on, the submarine can generate its own power to test its systems.


During sea trials, the submarine will test all its parameters: maximum diving depth, speeds and sensors. "It's difficult to put a timeline to these trials because they are event-based and not time-based," says veteran submariner Vice Admiral Arun Kumar Singh (retired). "The crew of the Arihant will have to check off literally hundreds of parameters."

The sea trials will include the submerged test-firing of the 'B-05' ballistic missile. The Arihant can carry 12 of these nuclear-tipped missiles, each with a range of 700 km. Sections of a second submarine, to be named Aridaman, are already at an advanced stage of outfitting at the Ship Building Centre (SBC) in Vizag; sources indicate it could be launched by next year. Sections of a third submarine are under construction at Larsen & Toubro's Hazira facility. The three SSBNs have been built under a secret navy-DRDO-Bhabha Atomic Research Centre (BARC) programme called the 'Advanced Technology Vessel' (ATV) project. India's strategic plans call for a fleet of five nuclear-powered attack submarines (SSNs) and five ballistic missile submarines (SSBNs), a goal that is unlikely to be achieved before 2025.

