Saturday 30 March 2013

Smoking Immediately Upon Waking May Increase Risk of Lung and Oral Cancer

Mar. 29, 2013 — The sooner a person smokes a cigarette upon waking in the morning, the more likely he or she is to develop lung or oral cancer, according to Penn State researchers.
(Credit: Image courtesy of Penn State)
"We found that smokers who consume cigarettes immediately after waking have higher levels of NNAL -- a metabolite of the tobacco-specific carcinogen NNK -- in their blood than smokers who refrain from smoking a half hour or more after waking, regardless of how many cigarettes they smoke per day," said Steven Branstetter, assistant professor of biobehavioral health.
According to Branstetter, other research has shown that NNK (4-(methylnitrosamino)-1-[3-pyridyl]-1-butanone) induces lung tumors in several rodent species. Levels of NNAL (4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol) in the blood can therefore predict lung cancer risk in rodents as well as in humans. In addition, NNAL levels are stable in smokers over time, and a single measurement can accurately reflect an individual's exposure.
Branstetter and his colleague Joshua Muscat, professor of public health sciences, examined data on 1,945 smoking adult participants from the National Health and Nutrition Examination Survey who had provided urine samples for analysis of NNAL. These participants also had provided information about their smoking behavior, including how soon they typically smoked after waking.
The researchers found that around 32 percent of the participants they examined smoked their first cigarette of the day within 5 minutes of waking; 31 percent smoked within 6 to 30 minutes of waking; 18 percent smoked within 31 to 60 minutes of waking; and 19 percent smoked more than one hour after waking. In addition, the researchers found that the NNAL level in the participants' blood was correlated with the participants' age, the age they started smoking, their gender and whether or not another smoker lived in their home, among other factors.
The team published its results in the March 29 issue of the journal Cancer Epidemiology, Biomarkers & Prevention.
"Most importantly, we found that NNAL level was highest among people who smoked the soonest upon waking, regardless of the frequency of smoking and other factors that predict NNAL concentrations," Branstetter said. "We believe these people who smoke sooner after waking inhale more deeply and more thoroughly, which could explain the higher levels of NNAL in their blood, as well as their higher risk of developing oral or lung cancer. As a result, time to first cigarette might be an important factor in the identification of high-risk smokers and in the development of interventions targeted toward early-morning smokers."
source:sciencedaily

Head-On Collisions Between DNA-Code Reading Machineries Accelerate Gene Evolution

Mar. 28, 2013 — Bacteria appear to speed up their evolution by positioning specific genes along the route of expected traffic jams in DNA encoding. Certain genes are in prime collision paths for the moving molecular machineries that read the DNA code, as University of Washington scientists explain in this week's edition of Nature.
Houra Merrikh, assistant professor of microbiology, and her student Samuel Million-Weaver, University of Washington, study mechanisms that bacteria use to evolve and adapt. (Credit: Christopher Merrikh)
The spatial-organization tactics their model organism, Bacillus subtilis, uses to evolve and adapt might be imitated in other related Gram-positive bacteria, including harmful, ever-changing germs like staph, strep, and listeria, to strengthen their virulence or cause persistent infections. The researchers think that these mechanisms for accelerating evolution may be found in other living creatures as well.
Replication -- the duplication of the genetic code to create a new set of genes -- and transcription -- the copying of DNA code to produce a protein -- are not separated by time or space in bacteria. Therefore, clashes between these machineries are inevitable. Replication traveling rapidly along a DNA strand can be stalled by a head-on encounter or a same-direction brush with slower-moving transcription.
The senior authors of the study, Houra Merrikh, UW assistant professor of microbiology, and Evgeni Sokurenko, UW professor of microbiology, and their research teams are collaborating to understand the evolutionary consequences of these conflicts. The major focus of Merrikh and her research team is on understanding mechanistic and physiological aspects of conflicts in living cells -- including why and how these collisions lead to mutations.
Impediments to replication, they noted, can cause instability within the genome, such as chromosome deletions or rearrangements, or incomplete separation of genetic material during cell division. When dangerous collisions take place, bacteria sometimes employ methods to repair, and then restart, the paused DNA replication, Merrikh discovered in her earlier work at the Massachusetts Institute of Technology.
To avoid unwanted encounters, bacteria orient most of their genes along what is called the leading strand of DNA, rather than the lagging. The terms refer to the direction the encoding activities travel on different forks of the unwinding DNA. Head-on collisions between replication and transcription happen on the lagging strand.
Despite the heightened risk of gene-altering clashes, the study bacterium B. subtilis still orients 25 percent of all its genes, and 6 percent of its essential genes, on the lagging strand.
The scientists observed that genes under the greatest natural selection pressure for amino-acid mutations, a sign of their adaptive significance, were on the lagging strand. Amino acids are the building blocks for proteins. Based on their analysis of mutations on the leading and the lagging strands, the researchers found that mutations accumulated faster in genes oriented so as to be subject to head-on replication-transcription conflicts than in genes subject to co-directional conflicts.
According to the researchers, together the mutational analyses of the genomes and the experimental findings indicate that head-on conflicts were more likely than same-direction conflicts to cause mutations. They also found that longer genes provided more opportunities for replication-transcription conflicts to occur. Lengthy genes were more prone to mutate.
The researchers noted that head-on replication-transcription encounters, and the subsequent mutations, could significantly increase structural variations in the proteins coded by the affected genes. Some of these chance variations might give the bacteria new options for adapting to changes or stresses in their environment. Like savvy investors, the bacteria appear to protect most of their genetic assets, but offer a few up to the high-roll stakes of mutation.
The researchers pointed out, "A simple switch in gene orientation …could facilitate evolution in specific genes in a targeted way. Investigating the main targets of conflict-mediated formation of mutations is likely to show far-reaching insights into adaptation and evolution of organisms."
source:sciencedaily

Quantum Computing? Physicists' New Technique for Cooling Molecules May Be a Stepping Stone to Quantum Computing

Mar. 27, 2013 — The next generation of computers promises far greater power and faster processing speeds than today's silicon-based machines. These "quantum computers" -- so called because they would harness the unique quantum mechanical properties of atomic particles -- could draw their computing power from a collection of super-cooled molecules.
This is Eric Hudson. (Credit: Reed Hutchinson/ University of California - Los Angeles)
But chilling molecules to a fraction of a degree above absolute zero, the temperature at which they can be manipulated to store and transmit data, has proven to be a difficult challenge for scientists.
Now, UCLA physicists have pioneered a new technique that combines two traditional atomic cooling technologies and brings normally springy molecules to a frozen standstill. Their research is published March 28 in the journal Nature.
"Scientists have been trying to cool molecules for a decade and have succeeded with only a few special molecules," said Eric Hudson, a UCLA assistant professor of physics and the paper's senior author. "Our technique is a completely different approach to the problem -- it is a lot easier to implement than the other techniques and should work with hundreds of different molecules."
Previous attempts to create ultracold molecules were only effective with one or two specific kinds. Creating a method that can be used with many different molecules would be a major step forward because it is difficult to say which materials might be used in quantum computers or other future applications, Hudson said.
By immersing charged barium chloride molecules in an ultracold cloud of calcium atoms, Hudson and his colleagues are able to prevent most of the molecules from vibrating and rotating. Halting the molecules is a necessary step before they can be used to store information the way a traditional computer does.
"The goal is to build a computer that doesn't work with zeros and ones, but with quantum mechanical objects," Hudson said. "A quantum computer could crack any code created by a classical computer and transmit information perfectly securely."
Hudson's experiment makes molecules extremely cold under highly controlled conditions to reveal the quantum mechanical properties that are hidden under normal circumstances. At room temperature, molecules rocket around, bouncing into each other and exchanging energy. Any information a scientist attempted to store in such a chaotic system would quickly become gibberish.
"We isolate these molecular systems in a vacuum, effectively levitating them in the middle of nothing," Hudson said. "This removes them from the rest of the world that wants to make them classical."
The quantum mechanical world of subatomic particles deviates from the classical world that we observe with the naked eye because according to quantum mechanics, electrons can only exist at specific energy levels. In a quantum computer made of a collection of single atoms, information might be stored by boosting some atomic electrons to higher energy levels while leaving others at lower energy states. However, these atomic energy states are not stable enough to reliably preserve data, Hudson said.
"One of the challenges with atoms is that their energy states are very easily influenced by the outside world," Hudson said. "You make this beautiful quantum state, but then the outside world tries to destroy that information."
Instead of saving data in easily disrupted atomic energy states, a more robust way to store information is in the rotational energy states of molecules, Hudson said. A spinning molecule in the lowest energy rotational state could represent a binary one, while a stationary molecule could represent a binary zero.
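To make the encoding idea concrete, here is a minimal toy sketch (mine, not the UCLA group's): the two lowest rotational levels of a molecule are treated as the |0⟩ and |1⟩ states of a qubit and written as a normalized state vector.

# Toy two-level "rotational qubit": |0> = molecule not rotating, |1> = lowest rotating state.
# Purely illustrative; not a model of the actual barium chloride experiment.
import numpy as np

ket0 = np.array([1.0, 0.0])   # non-rotating ground state -> binary zero
ket1 = np.array([0.0, 1.0])   # lowest rotational state   -> binary one

# An equal superposition, the kind of state a quantum computer would exploit:
psi = (ket0 + ket1) / np.sqrt(2)

print("norm:", np.linalg.norm(psi))    # 1.0 (properly normalized)
print("P(0), P(1):", np.abs(psi)**2)   # 0.5 each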
Despite its potential applications in quantum computing and other fields, cooling molecules to extremely low temperatures has proved a challenge. Even the simplest molecule, composed of only two atoms, is a far more complex system than a single atom. Each molecule vibrates and rotates like a miniature whirling slinky, and all of that movement must be stilled so that the molecule can lose energy and cool down.
A new cooling technique
To solve the ultracold molecule conundrum, Hudson and his group first created a floating cloud of calcium atoms corralled by incoming laser beams from all directions. This magneto-optical trap keeps the atoms stationary as it cools them to nearly absolute zero. They then use specialized rods with high, oscillating voltages as part of an ion trap to confine a cloud of positively-charged barium chloride molecules within the ultracold ball of calcium atoms to complete the cooling process.
For the vibrating, energetic molecules to lose heat, they must spend a significant amount of time in contact with the surrounding ultracold atom cloud. Hudson and his colleagues used barium chloride ions, molecules missing one electron, because charged molecules are easier to trap and cool than their neutral counterparts. The use of molecular ions is an essential innovation because previous efforts have demonstrated that neutral molecules ricochet off ultracold atoms without sufficient heat transfer.
"When a molecular ion and a neutral atom get close together they get in tight and bang off each other a bunch before the ion goes away," Hudson said. "When they collide like that it is very easy for the energy in one to go to the other."
While magneto-optical and ion traps are not new to the world of molecular physics, Hudson and his colleagues became the first group to combine these methods to create a cloud of ultracold molecules. This paper is the result of over four years of work spent designing, building, and testing their experiment.
"These two different technologies earned Nobel prizes for the scientists who developed them, but there wasn't really a body of knowledge about how to put these two procedures together," Hudson said.
The research is funded by the Army Research Office and the National Science Foundation.
Other co-authors include former UCLA postdoctoral scholar Wade Rellergert; UCLA graduate students Scott Sullivan, Steven Schowalter and Kuang Chen; and Temple University physics professor Svetlana Kotochigova.
source:sciencedaily

Imaging Methodology Reveals Nano Details Not Seen Before: Understanding Nanoparticles at Atomic Scale in 3-D Could Improve Materials

Mar. 27, 2013 — A team of scientists from the University of California, Los Angeles (UCLA) and Northwestern University has produced 3-D images and videos of a tiny platinum nanoparticle at atomic resolution that reveal new details of defects in nanomaterials that have not been seen before.
This is a graphic representation of a 3-D atomic resolution screw dislocation in a platinum nanoparticle. (Credit: Chien-Chun Chen and I-Sheng Chou, UCLA)
Prior to this work, scientists only had flat, two-dimensional images with which to view the arrangement of atoms. The new imaging methodology developed at UCLA and Northwestern will enable researchers to learn more about a material and its properties by viewing atoms from different angles and seeing how they are arranged in three dimensions.
The study will be published March 27 by the journal Nature.
The authors describe being able to see how the atoms of a platinum nanoparticle -- only 10 nanometers in diameter -- are arranged in three dimensions. They also identify how the atoms are arranged around defects in the platinum nanoparticle.
Similar to how CT scans of the brain and body are done in a hospital, the scientists took images of a platinum nanoparticle from many different directions and then pieced the images together using a new method that improved the quality of the images.
This novel method is a combination of three techniques: scanning transmission electron microscopy, equally sloped tomography (EST) and three-dimensional Fourier filtering. Compared to conventional CT, the combined method produces much higher quality 3-D images and allows the direct visualization of atoms inside the platinum nanoparticle in three dimensions.
"Visualizing the arrangement of atoms in materials has played an important role in the evolution of modern science and technology," said Jianwei (John) Miao, who led the work. He is a professor of physics and astronomy at UCLA and a researcher with the California NanoSystems Institute at UCLA.
"Our method allows the 3-D imaging of the local structures in materials at atomic resolution, and it is expected to find application in materials sciences, nanoscience, solid state physics and chemistry," he said.
"It turns out that there are details we can only see when we can look at materials in three dimensions," said co-author Laurence D. Marks, a professor of materials science and engineering at Northwestern's McCormick School of Engineering and Applied Science.
"We have had suspicions for a long time that there was more going on than we could see from the flat images we had," Marks said. "This work is the first demonstration that this is true at the atomic scale."
Nanotechnology expert Pulickel M. Ajayan, the Benjamin M. and Mary Greenwood Anderson Professor of Engineering at Rice University, complimented the research.
"This is the first instance where the three-dimensional structure of dislocations in nanoparticles has been directly revealed at atomic resolution," Ajayan said. "The elegant work demonstrates the power of electron tomography and leads to possibilities of directly correlating the structure of nanoparticles to properties, all in full 3-D view."
Defects can influence many properties of materials, and a technique for visualizing these structures at atomic resolution could lead to new insights beneficial to researchers in a wide range of fields.
"Much of what we know about how materials work, whether it is a catalyst in an automobile exhaust system or the display on a smartphone, has come from electron microscope images of how the atoms are arranged," Marks said. "This new imaging method will open up the atomic world of nanoparticles."
source:sciencedaily

Engineers Enable 'Bulk' Silicon to Emit Visible Light for the First Time

Mar. 27, 2013 — Electronic computing speeds are brushing up against limits imposed by the laws of physics. Photonic computing, where photons replace comparatively slow electrons in representing information, could surpass those limitations, but the components of such computers require semiconductors that can emit light.
White light emission from silicon coupled with nanocavity under laser excitation. (Credit: Image courtesy of University of Pennsylvania)
Now, research from the University of Pennsylvania has enabled "bulk" silicon to emit broad-spectrum, visible light for the first time, opening the possibility of using the element in devices that have both electronic and photonic components.
The research was conducted by associate professor Ritesh Agarwal, postdoctoral fellow Chang-Hee Cho and graduate students Carlos O. Aspetti and Joohee Park, all of the Department of Materials Science and Engineering in Penn's School of Engineering and Applied Science.
Their work was published in Nature Photonics.
Certain semiconductors, when imparted with energy, in turn emit light; they directly produce photons, instead of producing heat. This phenomenon is commonplace and used in light-emitting diodes, or LEDs, which are ubiquitous in traffic signals, new types of light bulbs, computer displays and other electronic and optoelectronic devices. Getting the desired photonic properties often means finding the right semiconducting material. Agarwal's group produced the first ever all-optical switch out of cadmium sulfide nanowires, for example.
Semiconducting materials -- especially silicon -- form the backbone of modern electronics and computing, but, unfortunately, silicon is an especially poor emitter of light. It belongs to a group of semiconducting materials, those with indirect band gaps, that turn added energy into heat rather than light. This makes integrating electronic and photonic circuits a challenge; materials with desirable photonic properties, such as cadmium sulfide, tend to have poor electrical properties, and vice versa, and they are not compatible with silicon-based electronic devices.
"The problem is that electronic devices are made of silicon and photonic devices are typically not," Agarwal said. "Silicon doesn't emit light and the materials that do aren't necessarily the best materials for making electronic devices."
With silicon entrenched as the material of choice for the electronics industry, augmenting its optical properties so it could be integrated into photonic circuitry would make consumer-level applications of the technology more feasible.
"People have tried to solve this problem by doping silicon with other materials, but the light emission is then in the very long wavelength range, so it's not visible and not very efficient and can degrade its electronic properties," Agarwal said. "Another approach is to make silicon devices that are very small, five nanometers in diameter or less. At that size you have quantum confinement effects, which allows the device to emit light, but making electrical connections at that scale isn't currently feasible, and the electrical conductivity would be very low."
To get elemental, "bulk" silicon to emit light, Agarwal's team drew upon previous research they had conducted on plasmonic cavities. In that earlier work, the researchers wrapped a cadmium sulfide nanowire first in a layer of silicon dioxide, essentially glass, and then in a layer of silver. The silver coating supports what are known as surface plasmons, waves that are a combination of oscillating metal electrons and of light. These surface plasmons are highly confined to the surface where the silicon dioxide and silver layers meet. For certain nanowire sizes, the silver coating creates pockets of resonance and hence highly confined electromagnetic fields -- in other words, light -- within the nanostructure.
Normally, after excitation the semiconductor must first "cool down," releasing energy as heat, before "jumping" back to the ground state and finally releasing the remaining energy as light. The Penn team's semiconductor nanowires coupled with plasmonic nanocavities, however, can jump directly from a high-energy excited state to the ground state, all but eliminating the heat-releasing cool-down period. This ultra-fast emission time opens the possibility of producing light from semiconductors such as silicon that might otherwise only produce heat.
"If we can make the carriers recombine immediately," Agarwal said, "then we can produce light in silicon."
In their latest work, the group wrapped pure silicon nanowires in a similar fashion, first with a coating of glass and then one of silver. In this case, however, the silver did not wrap completely around the wire, as the researchers first mounted the glass-coated silicon on a separate pane of glass. Tucking under the curve of the wire but unable to go between it and the glass substrate, the silver coating took on the shape of the Greek letter omega -- Ω -- while still acting as a plasmonic cavity.
Critically, the transparent bottom of the omega allowed the researchers to impart energy to the semiconductor with a laser and then examine the light silicon emitted.
Even though the silicon nanowire is excited at a single energy level, which corresponds to the wavelength of the blue laser, it produces white light that spans the visible spectrum. This translates into a broad bandwidth for possible operation in a photonic or optoelectronic device. In the future, it should also be possible to excite these silicon nanowires electrically.
"If you can make the silicon emit light itself, you don't have to have an external light source on the chip," Agarwal said. "We could excite the silicon electrically and get the same effect, and we can make it work with wires from 20 to 100 nanometers in diameter, so it's very compatible in terms of length scale with current electronics."
The research was supported by the U.S. Army Research Office and the National Institutes of Health.
source:sciencedaily

New Type of Solar Structure Cools Buildings in Full Sunlight

Mar. 27, 2013 — A Stanford team has designed an entirely new form of cooling panel that works even when the sun is shining. Such a panel could vastly improve the daylight cooling of buildings, cars and other structures by reflecting sunlight and radiating heat into the chilly vacuum of space.
Professor Shanhui Fan (center) with graduate students Aaswath Raman (left) and Eden Rephaeli (right). (Credit: Norbert von der Groeben)
Homes and buildings chilled without air conditioners. Car interiors that don't heat up in the summer sun. Tapping the frigid expanses of outer space to cool the planet. Science fiction, you say? Well, maybe not any more.
A team of researchers at Stanford has designed an entirely new form of cooling structure that cools even when the sun is shining. Such a structure could vastly improve the daylight cooling of buildings, cars and other structures by reflecting sunlight back into the chilly vacuum of space. Their paper describing the device was published March 5 in Nano Letters.
"People usually see space as a source of heat from the sun, but away from the sun outer space is really a cold, cold place," explained Shanhui Fan, professor of electrical engineering and the paper's senior author. "We've developed a new type of structure that reflects the vast majority of sunlight, while at the same time it sends heat into that coldness, which cools humanmade structures even in the day time."
The trick, from an engineering standpoint, is two-fold. First, the reflector has to reflect as much of the sunlight as possible. Poor reflectors absorb too much sunlight, heating up in the process and defeating the purpose of cooling.
The second challenge is that the structure must efficiently radiate heat back into space. Thus, the structure must emit thermal radiation very efficiently within a specific wavelength range in which the atmosphere is nearly transparent. Outside this range, Earth's atmosphere absorbs the outgoing radiation and sends much of it back down. Most people are familiar with this phenomenon. It's better known as the greenhouse effect -- the cause of global climate change.
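A quick back-of-the-envelope sketch (Wien's displacement law, not a result from the Stanford paper) shows why this is possible at all: objects near room temperature radiate most strongly around 10 micrometers, which happens to sit inside the mid-infrared band where the atmosphere is most transparent (commonly quoted as roughly 8 to 13 micrometers).

# Wien's displacement law: lambda_peak = b / T, with b ~ 2898 micrometer-kelvin.
# Illustrates that a panel near room temperature emits thermally near 10 micrometers.
WIEN_B_UM_K = 2898.0  # Wien's displacement constant in micrometer * kelvin

for temp_k in (273.0, 300.0, 320.0):
    peak_um = WIEN_B_UM_K / temp_k
    print(f"T = {temp_k:.0f} K -> thermal emission peaks near {peak_um:.1f} micrometers")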
Two goals in one
The new structure accomplishes both goals. It is an effective broadband mirror for solar light -- it reflects most of the sunlight. It also emits thermal radiation very efficiently within the crucial wavelength range needed to escape Earth's atmosphere.
Radiative cooling at nighttime has been studied extensively as a mitigation strategy for climate change, yet peak demand for cooling occurs in the daytime.
"No one had yet been able to surmount the challenges of daytime radiative cooling -- of cooling when the sun is shining," said Eden Rephaeli, a doctoral candidate in Fan's lab and a co-first-author of the paper. "It's a big hurdle."
The Stanford team has succeeded where others have come up short by turning to nanostructured photonic materials. These materials can be engineered to enhance or suppress light reflection in certain wavelengths.
"We've taken a very different approach compared to previous efforts in this field," said Aaswath Raman, a doctoral candidate in Fan's lab and a co-first-author of the paper. "We combine the thermal emitter and solar reflector into one device, making it both higher performance and much more robust and practically relevant. In particular, we're very excited because this design makes viable both industrial-scale and off-grid applications."
Using engineered nanophotonic materials the team was able to strongly suppress how much heat-inducing sunlight the panel absorbs, while it radiates heat very efficiently in the key frequency range necessary to escape Earth's atmosphere. The material is made of quartz and silicon carbide, both very weak absorbers of sunlight.
Net cooling power
The new device is capable of achieving a net cooling power in excess of 100 watts per square meter. By comparison, today's standard 10-percent-efficient solar panels generate about the same amount of power. That means Fan's radiative cooling panels could theoretically be substituted on rooftops for the existing solar panels that feed electricity to air conditioning systems needed to cool the building.
To put it a different way, a typical one-story, single-family house with just 10 percent of its roof covered by radiative cooling panels could offset 35 percent of its entire air conditioning needs during the hottest hours of the summer.
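As a rough illustration of what that 100-watts-per-square-meter figure means in practice, the sketch below multiplies it by an assumed roof; the roof area and coverage fraction are made-up example numbers, not figures from the study.

# Back-of-the-envelope sketch (assumed numbers, not from the Stanford paper):
# estimate the heat a radiative cooling panel could reject from a roof.
NET_COOLING_W_PER_M2 = 100.0   # net cooling power quoted in the article

roof_area_m2 = 150.0           # assumed single-story house roof (hypothetical)
coverage = 0.10                # the article's example: 10 percent of the roof covered

panel_area = roof_area_m2 * coverage
cooling_watts = panel_area * NET_COOLING_W_PER_M2

print(f"Panel area: {panel_area:.0f} m^2")
print(f"Continuous heat rejection: {cooling_watts:.0f} W "
      f"(~{cooling_watts/1000:.1f} kW, roughly a small window air conditioner)")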
Radiative cooling has another profound advantage over other cooling strategies, such as air conditioning: it is a passive technology. It requires no energy. It has no moving parts. It is easy to maintain. You put it on the roof or the sides of buildings and it starts working immediately.
A changing vision of cooling
Beyond the commercial implications, Fan and his collaborators foresee a broad potential social impact. Much of the human population on Earth lives in sun-drenched regions huddled around the equator. Electrical demand to drive air conditioners is skyrocketing in these places, presenting an economic and an environmental challenge. These areas tend to be poor and the power necessary to drive cooling usually means fossil-fuel power plants that compound the greenhouse gas problem.
"In addition to these regions, we can foresee applications for radiative cooling in off-the-grid areas of the developing world where air conditioning is not even possible at this time. There are large numbers of people who could benefit from such systems," Fan said.
source:sciencedaily

Magnetic Fingerprints of Interface Defects in Silicon Solar Cells Detected

Mar. 27, 2013 — Using a highly sensitive method of measurement, HZB physicists have managed to localize defects in amorphous/crystalline silicon heterojunction solar cells. Now, for the first time ever, using computer simulations at Paderborn University, the scientists were able to determine the defects' exact locations and assign them to certain structures within the interface between the amorphous and crystalline phases.
(Credit: HZB / University Paderborn)
In theory, silicon-based solar cells are capable of converting up to 30 percent of sunlight to electricity -- although, in reality, various loss mechanisms ensure that even under ideal lab conditions the efficiency does not exceed 25 percent. Advanced heterojunction cells are designed to address this problem: on top of the wafer's surface, at temperatures below 200 °C, a layer of disordered (amorphous) silicon about 10 nanometers thick is deposited. This thin film largely saturates the interface defects and conducts charge carriers out of the cell. Heterojunction solar cells already reach high efficiencies of up to 24.7 percent -- even at industrial scale. Until now, however, scientists had only a rough understanding of the processes at the remaining interface defects.
Now, physicists at HZB's Institute for Silicon Photovoltaics have figured out a rather clever way of detecting the remaining defects and characterizing their electronic structure. "If electrons get deposited on these defects, we are able to use their spin, that is, their small magnetic moment, as a probe to study them," Dr. Alexander Schnegg explains. With the help of EDMR, electrically detected magnetic resonance, an ultrasensitive method of measurement, they were able to determine the local defects' structure by detecting their magnetic fingerprint in the photocurrent of the solar cell under a magnetic field and microwave radiation.
Theoretical physicists at Paderborn University were able to compare these results with quantum chemical computer simulations, thus obtaining information about the defects' positions within the layers and the processes by which they decrease the cells' efficiency. "We basically found two distinct families of defects," says Dr. Uwe Gerstmann from Paderborn University, who collaborates with the HZB team in a program sponsored by the Deutsche Forschungsgemeinschaft (DFG priority program 1601). "Whereas in the first one, the defects are rather weakly localized within the amorphous layer, a second family of defects is found directly at the interface, but in the crystalline silicon."
For the first time, the scientists have succeeded in directly detecting and characterizing, with atomic resolution, processes that compromise these solar cells' high efficiency. The cells were manufactured and measured at HZB; the numerical methods were developed at Paderborn University. "We can now apply these findings to other types of solar cells in order to optimize them further and to decrease production costs," says Schnegg.
source:sciencedaily

Researchers Attach Lyme Disease Antibodies to Nanotubes, Paving Way for Diagnostic Device

Mar. 26, 2013 — Early diagnosis is critical in treating Lyme disease. However, nearly one quarter of Lyme disease patients are initially misdiagnosed because currently available serological tests have poor sensitivity and specificity during the early stages of infection. Misdiagnosed patients may go untreated and thus progress to late-stage Lyme disease, where they face longer and more invasive treatments, as well as persistent symptoms.
An illustration of a Lyme antibody attached to a carbon nanotube. (Credit: Image courtesy of University of Pennsylvania)
Existing tests assess the presence of antibodies against bacterial proteins, which take weeks to form after the initial infection and persist after the infection is gone. Now, a nanotechnology-inspired technique developed by researchers at the University of Pennsylvania may lead to diagnostics that can detect the organism itself.
The study was led by professor A. T. Charlie Johnson of the Department of Physics and Astronomy in Penn's School of Arts and Sciences along with graduate student Mitchell Lerner, undergraduate researcher Jennifer Dailey and postdoctoral fellow Brett R. Goldsmith, all of Physics. They collaborated with Dustin Brisson, an assistant professor of biology who provided the team with expertise on the bacterium.
Their research was published in the journal Biosensors and Bioelectronics.
"When you're initially infected with the Lyme disease bacterium, you don't develop antibodies for many days to a few weeks," Johnson said. "Many people see their physician before antibodies develop, leading to negative serological test results. And after an initial infection, you're still going to have these antibodies, so using these serological diagnostics won't make it clear if you're still infected or not after you've been treated with antibiotics."
The research team's idea was to flip the process around, using laboratory-produced antibodies to detect the presence of proteins from the organism. This is an extension of previous work Johnson's lab has done connecting other biological structures, such as olfactory receptors and DNA, to carbon nanotube-based devices.
Carbon nanotubes, rolled-up lattices of carbon atoms, are highly conductive and sensitive to electrical charge, making them promising components of nanoscale electronic devices. By attaching different biological structures to the exteriors of the nanotubes, they can function as highly specific biosensors. When the attached structure binds to a molecule, that molecule's charge can affect the electrical conduction of the nanotube, which can be part of an electrical circuit like a wire. Such a device can therefore provide an electronic read-out of the presence, or even concentration, of a particular molecule.
To get the electrical signal out of these nanotubes, the team first turned them into transistor devices.
"We first grow these nanotubes on what amounts to a large chip using a vapor deposition method, then make electrical connections essentially at random," Johnson said. "We then break up the chip and test all of the individual nanotube transistors to see which work the best."
In their recent experiment, Johnson's team attached antibodies that naturally develop in most animals that are infected with the Lyme disease bacterium to these nanotube transistors. These antibodies naturally bind to an antigen, in this case, a protein in the Lyme bacterium, as part of the body's immune response.
"We have a chemical process that lets us connect any protein to carbon nanotubes. Nanotubes are very stable, so we have a very reactive compound that binds to the nanotube and also has a carboxylic acid group on the other end. For biochemists, getting any kind of protein to bind to a carboxylic acid group is just child's play at this point, and we have worked with them to learn how to perform this chemistry on the side wall of nanotubes. "
After using atomic-force microscopy to show that antibodies had indeed bound to the exteriors of their nanotube transistors, the researchers tested them electrically to get a baseline reading. They then put the nanotubes in solutions that contained different concentrations of the target Lyme bacteria protein.
"When we wash away the solution and test the nanotube transistors again, the change in what we measure tells us that how much of the antigen has bound," Johnson said. "And we see the relationship we expect to see, in that the more antigen there was in the solution, the bigger the change in the signal."
The smallest concentration the nanotube devices could detect was four nanograms of protein per milliliter of solution.
"This sensitivity is more than sufficient to detect the Lyme disease bacterium in the blood of recently-infected patients and may be sufficient to detect the bacterium in fluids of patients that have received inadequate treatment," Brisson said.
"We really want the protein we are looking to detect to bind as close to the nanotube as possible, as that is what increases the strength of the electrical signal," Johnson said. "Developing a smaller, minimal version of the antibody -- what we call a single chain variable fragment -- would be a next step.
"Based on our previous work with single chain variable fragments of other antibodies, this would probably make such a device about a thousand times more sensitive."
The researchers suggested that, given the flexibility of their technique for attaching different biological structures, eventual diagnostic tools could incorporate multiple antibodies, each detecting a different protein from the Lyme bacterium. Such a setup would improve accuracy and cut down on the possibility of false-positive diagnoses.
"If we were to do this type of test on a person's blood now, however, we would say the person has the disease," Johnson said. "The first thought is that if you detect any protein coming from the Lyme organism in your blood, you are infected and should get treatment right away."
source:sciencedaily

Super Batteries? Hybrid Ribbons a Gift for Powerful Batteries

Mar. 25, 2013 — Hybrid ribbons of vanadium oxide (VO2) and graphene may accelerate the development of high-power lithium-ion batteries suitable for electric cars and other demanding applications.
Hydrothermal processing of vanadium pentoxide and graphene oxide creates graphene-coated ribbons of crystalline vanadium oxide, which show great potential as ultrafast charging and discharging electrodes for lithium-ion batteries. (Credit: Ajayan Group/Rice University)
The Rice University lab of materials scientist Pulickel Ajayan determined that the well-studied material is a superior cathode for batteries that could supply both high energy density and significant power density. The research appears online this month in the American Chemical Society journal Nano Letters.
The ribbons created at Rice are thousands of times thinner than a sheet of paper, yet their ability to charge and discharge very quickly far outstrips that of current materials. Cathodes built into half-cells for testing at Rice fully charged and discharged in 20 seconds and retained more than 90 percent of their initial capacity after more than 1,000 cycles.
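For context (simple arithmetic, not a figure from the paper), a full charge or discharge in 20 seconds corresponds to a rate of roughly 180C, that is, 180 times the current that would drain the cell in one hour.

# Rate ("C-rate") implied by a 20-second full charge/discharge:
# 1C drains the cell in one hour, so C-rate = 3600 s / charge_time_s.
charge_time_s = 20.0
c_rate = 3600.0 / charge_time_s
print(f"{charge_time_s:.0f} s full charge/discharge ~ {c_rate:.0f}C")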
"This is the direction battery research is going, not only for something with high energy density but also high power density," Ajayan said. "It's somewhere between a battery and a supercapacitor."
The ribbons also have the advantage of using relatively abundant and cheap materials. "This is done through a very simple hydrothermal process, and I think it would be easily scalable to large quantities," he said.
Ajayan said vanadium oxide has long been considered a material with great potential, and in fact vanadium pentoxide has been used in lithium-ion batteries for its special structure and high capacity. But oxides are slow to charge and discharge, due to their low electrical conductivity. The high-conductivity graphene lattice that is literally baked in solves that problem nicely, he said, by serving as a speedy conduit for electrons and channels for ions.
The atom-thin graphene sheets bound to the crystals take up very little bulk. In the best samples made at Rice, fully 84 percent of the cathode's weight was the lithium-slurping VO2, which stored 204 milliamp-hours of charge per gram. The researchers, led by Rice graduate student Yongji Gong and lead author Shubin Yang, said they believe that to be among the best overall performance ever seen for lithium-ion battery electrodes.
"One challenge to production was controlling the conditions for the co-synthesis of VO2 ribbons with graphene," Yang said. The process involved suspending graphene oxide nanosheets with powdered vanadium pentoxide (layered vanadium oxide, with two atoms of vanadium and five of oxygen) in water and heating it in an autoclave for hours. The vanadium pentoxide was completely reduced to VO2, which crystallized into ribbons, while the graphene oxide was reduced to graphene, Yang said. The ribbons, with a web-like coating of graphene, were only about 10 nanometers thick, up to 600 nanometers wide and tens of micrometers in length.
"These ribbons were the building blocks of the three-dimensional architecture," Yang said. "This unique structure was favorable for the ultrafast diffusion of both lithium ions and electrons during charge and discharge processes. It was the key to the achievement of excellent electrochemical performance."
In testing the new material, Yang and Gong found that its capacity for lithium storage remained stable after 200 cycles, even at high temperatures (167 degrees Fahrenheit, or 75 degrees Celsius) at which other cathodes commonly decay even at low charge-discharge rates.
"We think this is real progress in the development of cathode materials for high-power lithium-ion batteries," Ajayan said, suggesting the ribbons' ability to be dispersed in a solvent might make them suitable as a component in the paintable batteries developed in his lab.
Co-authors of the new paper are Rice graduate students Daniel Hashim and Lulu Ma; research scientist Zheng Liu; former Rice visiting researcher Liang Zhan, now an associate professor at East China University of Science and Technology in Shanghai; and faculty fellow Robert Vajtai. Ajayan is the Benjamin M. and Mary Greenwood Anderson Professor in Engineering and a professor of mechanical engineering and materials science, chemistry, and chemical and biomolecular engineering.
The work was funded by the U.S. Army Research Office and the Office of Naval Research through a Multidisciplinary University Research Initiative grant and a National Science Foundation Graduate Research Fellowship grant.
source:sciencedaily

Opposites Attract: How Cells and Cell Fragments Move in Electric Fields

Mar. 28, 2013 — Like tiny, crawling compass needles, whole living cells and cell fragments orient and move in response to electric fields -- but in opposite directions, scientists at the University of California, Davis, have found. Their results, published April 8 in the journal Current Biology, could ultimately lead to new ways to heal wounds and deliver stem cell therapies.
Both whole cells and cell fragments that break off move in an electric field, but in opposite directions. Electric fields may guide cells into wounded tissue to speed repair. (Credit: Videography by Min Zhao/UC Davis)
When cells crawl into wounded flesh to heal it, they follow an electric field. In healthy tissue there's a flux of charged particles between layers. Damage to tissue sets up a "short circuit," changing the flux direction and creating an electrical field that leads cells into the wound. But exactly how and why does this happen? That's unclear.
"We know that cells can respond to a weak electrical field, but we don't know how they sense it," said Min Zhao, professor of dermatology and ophthalmology and a researcher at UC Davis' stem cell center, the Institute for Regenerative Cures. "If we can understand the process better, we can make wound healing and tissue regeneration more effective."
The researchers worked with cells that form fish scales, called keratocytes. These fish cells are commonly used to study cell motion, and they also readily shed cell fragments, wrapped in a cell membrane but lacking a nucleus, major organelles, DNA or much else in the way of other structures.
In a surprise discovery, whole cells and cell fragments moved in opposite directions in the same electric field, said Alex Mogilner, professor of mathematics and of neurobiology, physiology and behavior at UC Davis and co-senior author of the paper.
It's the first time that such basic cell fragments have been shown to orient and move in an electric field, Mogilner said. That allowed the researchers to discover that the cells and cell fragments are oriented by a "tug of war" between two competing processes.
Think of a cell as a blob of fluid and protein gel wrapped in a membrane. Cells crawl along surfaces by sliding and ratcheting protein fibers inside the cell past each other, advancing the leading edge of the cell while withdrawing the trailing edge.
Assistant project scientist Yaohui Sun found that when whole cells were exposed to an electric field, actin protein fibers collected and grew on the side of the cell facing the negative electrode (cathode), while a mix of contracting actin and myosin fibers formed toward the positive electrode (anode). Both actin alone, and actin with myosin, can create motors that drive the cell forward.
The polarizing effect set up a tug-of-war between the two mechanisms. In whole cells, the actin mechanism won, and the cell crawled toward the cathode. But in cell fragments, the actin/myosin motor came out on top, got the rear of the cell oriented toward the cathode, and the cell fragment crawled in the opposite direction.
The results show that there are at least two distinct pathways through which cells respond to electric fields, Mogilner said. At least one of the pathways -- leading to organized actin/myosin fibers -- can work without a cell nucleus or any of the other organelles found in cells, beyond the cell membrane and proteins that make up the cytoskeleton.
Upstream of those two pathways is some kind of sensor that detects the electric field. In a separate paper to be published in the same journal issue, Mogilner and Stanford University researchers Greg Allen and Julie Theriot narrow down the possible mechanisms. The most likely explanation, they conclude, is that the electric field causes certain electrically charged proteins in the cell membrane to concentrate at the membrane edge, triggering a response.
The team also included Hao Do, Jing Gao and Ren Zhao, all at the Institute for Regenerative Cures and the UC Davis departments of Ophthalmology and Dermatology. Sun is co-advised by Mogilner and Zhao; Gao is now working at Yunnan Normal University, Kunming, China, and Ren Zhao is at the Third Military Medical University, Chongqing, China.
The work was funded by the National Institutes of Health, the California Institute for Regenerative Medicine and the National Science Foundation.
source:sciencedaily

Robotic Ants Successfully Mimic Real Colony Behavior

Mar. 28, 2013 — Scientists have successfully replicated the behaviour of a colony of ants on the move with the use of miniature robots, as reported in the journal PLOS Computational Biology. The researchers, based at the New Jersey Institute of Technology (Newark, USA) and at the Research Centre on Animal Cognition (Toulouse, France), aimed to discover how individual ants, when part of a moving colony, orient themselves in the labyrinthine pathways that stretch from their nest to various food sources.
This image shows the robot ants (Alices) pursuing a light trail around the constructed maze. (Credit: Simon Garnier: Garnier S, Combe M, Jost C, Theraulaz G (2013) Do Ants Need to Estimate the Geometrical Properties of Trail Bifurcations to Find an Efficient Route? A Swarm Robotics Test Bed. PLOS Comput Biol 9(3): e1002903. doi:10.1371/journal.pcbi.1002903. CC)
The study focused mainly on how Argentine ants behave and coordinate themselves in both symmetrical and asymmetrical pathways. In nature, ants do this by leaving chemical pheromone trails. This was reproduced by a swarm of sugar-cube-sized robots, called "Alices," leaving light trails that they can detect with two light sensors mimicking the role of the ants' antennae.
At the beginning of the experiment, when branches of the maze had no light trail, the robots adopted an "exploratory behaviour" modelled on the regular insect movement pattern of moving randomly but in the same general direction. This led the robots to choose the path that deviated least from their trajectory at each bifurcation of the network. If the robots detected a light trail, they would turn to follow that path.
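A minimal sketch of that decision rule, as I read it from the description above (illustrative pseudologic, not the authors' actual "Alice" controller code): follow a detected light trail if there is one, otherwise take a directed random walk and pick the branch that deviates least from the current heading.

# Sketch of the navigation rule described above (illustrative only).
import math
import random

def choose_heading(current_heading, branch_headings, trail_heading=None):
    """Pick the next heading at a bifurcation (all angles in radians)."""
    if trail_heading is not None:
        # Follow the light "pheromone" trail whenever the sensors detect one.
        return trail_heading

    # Otherwise: directed random walk -- small random wobble, then take the
    # branch that deviates least from the current direction of travel.
    desired = current_heading + random.uniform(-0.3, 0.3)

    def deviation(branch):
        return abs(math.atan2(math.sin(branch - desired), math.cos(branch - desired)))

    return min(branch_headings, key=deviation)

# Example: a robot heading east (0 rad) reaches a fork with branches at +30 and -70 degrees.
print(choose_heading(0.0, [math.radians(30), math.radians(-70)]))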
One outcome of the robotic model was the discovery that the robots did not need to be programmed to identify and compute the geometry of the network bifurcations. They managed to navigate the maze using only the pheromone-like light trail and the programmed directional random walk, which directed them to the more direct route between their starting area and a target area on the periphery of the maze. Individual Argentine ants have poor eyesight and move too quickly to make a calculated decision about their direction. The fact that the robots oriented themselves in the maze in a fashion similar to that observed in real ants therefore suggests that a complex cognitive process is not necessary for colonies of ants to navigate efficiently in their complex network of foraging trails.
"This research suggests that efficient navigation and foraging can be achieved with minimal cognitive abilities in ants," says lead author Simon Garnier. "It also shows that the geometry of transport networks plays a critical role in the flow of information and material in ant as well as in human societies."
source:sciencedaily

Wednesday 27 March 2013

Stunning image of the night sky captures Northern Lights, Milky Way and a meteorite in the same frame

source: http://www.dailymail.co.uk/news/article-2063886/Northern-Lights-Milky-Way-meteorite-captured-photographer-frame.html

They are so often photographed individually by star-gazers all over the world.
But one amateur snapper is celebrating this once-in-a-lifetime picture which includes three stunning natural phenomena - the Northern Lights, Milky Way and a meteorite - in one frame.
It was taken by Tommy Eliassen in Ifjord, Finnmark, Norway, at the very beginning of the new aurora season.
Three in one: Amateur photographer Tommy Eliassen caught this stunning time-lapse image of a meteorite streaking through the Milky Way (above centre) alongside the vibrant colours of the aurora
In one image, the whole of the frame is filled with the Milky Way's billions of brightly lit stars - with the vibrant green flashes of the aurora illuminating the sky.
Another shows the streaks of green flashes dropping to earth like rain with a meteorite soaring through the middle of it - all while a shimmering lake is illuminated by the colourful display.

While another features a bend of green light following the curvature of the earth and almost appearing to slice through the starry sky.
The spectacular images were taken by Tommy on September 25.
He had to wait days until a one-hour window of clear sky broke out, enabling him to quickly grab his camera and take his pictures at the remote lake location.
Tommy, who only began taking pictures in the late 90s, said: 'It had been cloudy for days, but I saw on the weather forecast that I had a one-hour window of clear sky on this particular night.
'I quickly went and took some pictures in a regular spot of mine, and thought to myself that I had got some good aurora shots and also some separate good milky way shots.
'But just as the clouds started to come in over the mountains I noticed this faint aurora lining up perfectly beside the milky way.
'Normally the lights from the aurora is much, much stronger than the lights from the stars, so getting the right exposure on both is difficult.
Aurora-inspiring: This image shows the green beam of the Northern Lights cutting through the night sky as it tracks the curvature of the earth
'But it was ideal conditions - almost once in a lifetime.
'I quickly got seven pictures of the scene before the clouds covered the sky again.
'I was so focused on getting it right that I didn't think about it at the time. But afterwards I realised that this was something special and that it might be years before I get an opportunity like it again.'
Tommy, from Mo I Rana, Norway, works in salmon vaccination, which enables him to get around his native nation and view the aurora from various different points.
The 33-year-old, who specialises in night, landscape and time-lapse photography in his spare time, added: 'Ifjord is also a perfect location for this kind of photography because only 10 people live there and it is 130km from the nearest town, so light pollution isn't a problem.
'On this night, I had been out test-shooting my new Nikon D700 camera in the garden.
'It was two days before the new moon so the conditions were almost perfect for aurora and star photography.'
Tommy used a Nikon D700 with a 14-24mm f/2.8 lens, with a high ISO of between 4,000 and 8,000 and exposures of between 25 and 30 seconds.
Side by side: The billions of stars of the Milky Way and the Northern Lights appear to fall like rain over mountains and a moody lake. Mr Eliassen waited for days to get the perfect shots


Black Hole Flings Companion Star at 2,000,000 Kilometers Per Hour!


Astronomical artist David Hardy's phenomenal drawing of a star orbiting a collapsed companion. Not originally meant to represent the system described here, it nonetheless paints a good picture of it. Literally.
Image credit: David Hardy
Life near a black hole is…unpleasant. Especially if you're a star, and especially if you're prone to motion sickness. Case in point: A black hole was recently discovered that has a star orbiting it at an incredible two million kilometers per hour (1.2 million mph). Even though the star is about a million kilometers (600,000 miles) away from the black hole—a little more than twice the distance from the Earth to the Moon—it screams around its orbit in just 2.4 hours.
Just thinking about it makes me want to hurl.
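For what it's worth, those numbers hang together: treating the roughly one-million-kilometer separation as the star's orbital radius and assuming a circular 2.4-hour orbit gives a speed in the same ballpark as the quoted two million kilometers per hour. A quick check (my arithmetic, not from the discovery papers):

# Rough consistency check of the quoted orbit: speed = circumference / period
# for an assumed circular orbit. Approximate numbers only.
import math

orbital_radius_km = 1.0e6   # "about a million kilometers" (rounded)
period_hours = 2.4

speed_km_per_hour = 2 * math.pi * orbital_radius_km / period_hours
print(f"Implied orbital speed: {speed_km_per_hour:.2e} km/h")
# ~2.6e6 km/h -- the same ballpark as the quoted 2 million km/h, given the
# rounded "about a million kilometers" separation.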
But it’s still pretty exciting, and just this little bit of knowledge tells us a lot about the system. And it has quite a history.
First, this record-breaking pair is at least 1000 light years away—an exact distance is pretty hard to determine, and it could be more than 20 times farther away than that—and was discovered in 2010 simultaneously by NASA's Swift and the Japanese MAXI satellites (which is why the system is named MAXI J1659−152; after the satellite and the system's coordinates on the sky).
Black holes can emit fiercely bright X-rays as they gobble matter down: The material is heated tremendously and gets whipped up by the ridiculously strong magnetic fields around the black hole and in the material itself. It can reach temperatures of millions of degrees and blast out X-rays that can be visible from Earth. In this case, matter is being drained off the companion star, which falls into a disk around the black hole before taking The Final Plunge. It’s the disk that is so hot and bright.
The European X-ray observatory XMM-Newton stared at the duo for over 14 hours, and confirmed what was first seen by Swift: a very regular dip in the light every 2.4 hours. From Earth we see the system revolving close to edge-on, and as the star and black hole orbit each other a small irregularity in the disk is probably blocking the X-rays, causing the dip. That’s how astronomers determined the orbital period. 
The thing is, there’s no way the companion star—a red dwarf about 1/4th the mass of the Sun—could’ve formed that close to a black hole. So how did it get there?
Here’s how this all may have happened. I’m guessing with some of this, so fairly warned be thee, says I, but it’s based on studying supernovae and knowing a bit about how systems like this evolve. Caveat lector.
A long time ago, perhaps billions of years ago, a binary star was born. One was a modest star, red/orange and cool, somewhat less massive than the Sun. The other was a monster, a massive blue hot-head probably 20 times the Sun’s mass or more. They were separated by quite a ways, tens or hundreds of millions of kilometers.
The blue star ate its fuel up pretty rapidly. By the time it was only a few million years old it was already dying. It swelled up into a red supergiant, expanding so hugely that if you were to replace the Sun with it, the surface of the star would stretch out well past the orbit of Mars! At this point the red supergiant would have been blasting out a super-solar wind, draining away its outer layers.
Still, the star swelled up so immensely it's entirely possible that the dwarf star companion literally found itself inside the red supergiant's atmosphere. You might think that would slow it to a stop quickly, but in reality space is big, and red supergiant atmospheres are whisper thin. But—although it would take a long time—eventually friction would win and the dwarf would slowly spiral in toward the core of the star.
Supernova painting by my old friend and fantastic space artist Dan Durda.
Image credit: Dan Durda
Then, disaster. The core of the supergiant ran out of fuel and collapsed. This started a chain reaction that ended in a supernova, one of the most powerful explosions in the Universe. The outer layers of the star were ripped away in the explosion, and the core collapsed down into a black hole.
At this point, the smaller star was much closer in to the core of the star, and suffered the brunt of the blast. It may have lost mass itself as the fury of the supernova blew past it, but it would survive, and due to a quirk of physics wouldn’t have been ejected unless the primary star lost a bit more than half its mass in the supernova event. Since we still see the star there, we know this didn’t happen. That means the smaller star remained bound, now on an elliptical orbit that dipped close in to the newly-formed black hole.
Over time, the gravity from the black hole would force the orbit to become circular. If the star were close enough, the black hole’s gravity would be able to strip material from the outer part of the star, forming a bright, hot disk…and leaving us where we are now.
A lot of the details of the history of the system are important. How massive was the smaller star to start with? That would help determine the age of the system. How close did it get to the core of the bigger star before the latter exploded? How much mass did it lose? We’re not even sure of the mass of the black hole, though it’s probably between 3 and 20 or so times the mass of the Sun—a pretty normal mass for a stellar black hole. A lot of what we see in the system now depends on what the two stars were originally like, and that information may be lost forever.
But what of the future? MAXI J1659−152 was discovered because it had an outburst, a sudden flare of brightness. It may be that the smaller star is undergoing some paroxysms as it feeds the black hole material. Over time it will lose more mass to the black hole and slowly spiral in toward it. There will come a day, inevitably, when the fierce tides of its more massive master will tear it apart. When that happens, the outburst will make what it's doing now look downright weak in comparison. We've seen what happens when a black hole tears apart a star (with followups here and here) and it's a fairly dramatic event. And by that I mean really, really, really dramatic. Read those posts; I won't spoil it here. But holy yikes. After all, a whole star is getting torn apart by a black hole.
Black hole eats a star
A computer simulation, based on real physics, of a black hole tearing apart a star. Click to schwarzschildenate.
I have to say: Looking over what I just wrote, and assuming I’m in the ballpark, there’s not one thing about this binary that isn’t weird and awesomely cool. Seriously, every step of the way is just flippin’ amazing, right down to the idea of a black hole being able to toss around a whole star at speeds hundreds of times faster than a rifle bullet.
And yet we only happened to twig on to it because a celestial hiccup caught our attention a few years ago. As always, I have to wonder: What the heck else is out there just waiting to be found?
source:http://www.slate.com