Milk, Coffee’s best mate? Maybe not in the quantum regime!

What happens when you pour cold milk over hot coffee? Given enough time, and provided you resist the temptation to swirl the mixture around with a spoon, the milk will spread completely into the coffee and you end up with a delicious mug of coffee, albeit a little less warm. Heat from the hot coffee molecules is transferred to the cold milk molecules until they reach a steady state. However, do start sipping your coffee soon, or else the molecules in the coffee will transfer most of their heat to the molecules in the air.

Similarly, if you thrust one end of a metallic rod, which is a good conductor of heat, into a flame, the rod heats up and before long you feel the warmth at the other end, in your hand. Thus, by some mechanism, heat from a hot region is transferred to a cold region. We also know that if a metallic rod we hold accidentally touches a live wire, electrocution results!

What is the mechanism by which heat and electricity are conducted along metals? Any student of science would tell you that in a good conductor it is due to the freedom that energy carriers (phonons) and charge carriers (electrons) enjoy, a freedom that is simply absent in an insulator.

This explanation hardly satiates our curiosity, and we often want to know the answer at a deeper level. For that we have to dive into the metal and go beneath the surface. What do we see? A sea of electrons moving through a regular, periodic arrangement of positive ions. In metals, electrons are the predominant carriers of energy and charge. The electrical conductivity is directly proportional to the average length an electron can travel in the metal before it gets scattered by an ion. This is the classical picture, which treats electrons as particles colliding with bigger ions, something like in a game of marbles.

However, the classical picture is incomplete at the subatomic scale. Since the ions are positively charged they have an electric field (or in other words a potential) associated with them and since they are arranged in a periodic fashion the electron sees a periodic potential. We know from quantum mechanics that electrons have a wave nature. In fact, all matter has an inherent waviness associated with it: even us humans, although in our case the wavelength would be unimaginably small and unfortunately meaningless. Google “De Broglie wave” sometime. The electronic wave function within some energy bands is spread (or delocalized) over the periodic potential of the positive ions.
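To get a feel for why the human wavelength is "unimaginably small", here is a quick back-of-the-envelope calculation of the de Broglie wavelength λ = h/mv; the speeds chosen are merely illustrative:

```python
# De Broglie wavelength: lambda = h / (m * v)
h = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_per_s):
    return h / (mass_kg * speed_m_per_s)

# An electron moving at ~1e6 m/s (a typical speed scale in a metal)
electron = de_broglie_wavelength(9.109e-31, 1e6)
# A 70 kg person walking at ~1.4 m/s
person = de_broglie_wavelength(70.0, 1.4)

print(f"electron: {electron:.2e} m")  # ~7e-10 m, comparable to atomic spacings
print(f"person:   {person:.2e} m")    # ~7e-36 m, utterly negligible
```

The electron's wavelength comes out comparable to the spacing between ions in a crystal, which is exactly why its wave nature matters inside a metal, while a person's wavelength is some 26 orders of magnitude smaller still.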

One interesting question to ask is: what would happen if we make the potential aperiodic, thus introducing disorder into the system? Let's do just that by adding impurities, for example by substituting some of the ions in a lattice of Aluminum ions with Copper ions. The electron now sees an aperiodic potential, because the substituted Copper ions have a different potential. The electronic wave function that was earlier spread out tends to localize around the impurity. In the classical model, this can be thought of as a reduction in the average length an electron can travel, leading to a drop in conductivity. Let's keep adding more impurities just to see what happens: beyond a certain amount of impurity, electronic motion stops altogether and the material becomes an insulator. This means the wave function has become localized around the disorder. This is known as Anderson localization, after its discoverer and Nobel Laureate P. W. Anderson.
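For readers who like to experiment, this localization can be seen in a textbook toy model (not the model used in the paper): a one-dimensional tight-binding chain in which random on-site energies stand in for the impurities. The inverse participation ratio (IPR) of the eigenstates, which is small for spread-out states and large for localized ones, jumps when disorder is switched on:

```python
import numpy as np

def anderson_hamiltonian(n_sites, disorder_strength, rng):
    """1D tight-binding chain: hopping t = 1, with random on-site
    energies in [-W/2, W/2] playing the role of the impurity potential."""
    onsite = disorder_strength * (rng.random(n_sites) - 0.5)
    H = np.diag(onsite)
    hop = -np.ones(n_sites - 1)
    H += np.diag(hop, 1) + np.diag(hop, -1)
    return H

def mean_ipr(H):
    """Inverse participation ratio averaged over eigenstates:
    ~1/N for delocalized states, of order 1 for localized ones."""
    _, vecs = np.linalg.eigh(H)
    return np.mean(np.sum(np.abs(vecs) ** 4, axis=0))

rng = np.random.default_rng(0)
clean = mean_ipr(anderson_hamiltonian(200, 0.0, rng))  # periodic potential
dirty = mean_ipr(anderson_hamiltonian(200, 5.0, rng))  # strong disorder
print(clean, dirty)  # disorder raises the IPR: the states localize
```

With no disorder the eigenstates are delocalized waves spread over all 200 sites; with strong disorder each state collapses onto a few sites, the numerical analogue of the wave function getting stuck around the impurities.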

Anderson localization is due to wave interference between multiple scattering paths (an easier way to think about this is to imagine that the electron zig-zags around the impurities, resulting in net zero motion) and is well understood for particles/waves that do not interact with each other in a disordered (impure) medium. One may then ask: electrons are present in abundance in a metal, so there is every possibility they will interact; what happens when we include these interactions? Investigations of localization in disordered media in the presence of short-range interactions between quantum particles (electrons, for example) led to the concept of many-body localization (MBL).

Let us try to understand many-body localization with a macroscopic analogy. Going back to our earlier example of cold milk and hot coffee, this time, instead of pouring milk into the coffee, let's add a single drop at the center of the cup. In normal systems, our everyday experience holds true: the milk and coffee mix completely, leaving no trace of where the drop of milk hit the coffee. However, in an ideal MBL system, even after a very long time the milk drop will stay where it fell and maintain its coldness.

Dibyendu Roy, from the Raman Research Institute, along with his collaborators Rajeev Singh from Bar-Ilan University and Roderich Moessner from the Max Planck Institute for the Physics of Complex Systems, have recently published a paper on MBL and have the following to say about their work:

MBL is a recently discovered state of solids. It is an insulator and results from the interplay of disorder and interaction between particles. The nature of MBL in more than one spatial dimension is not entirely understood. We have studied properties of MBL in one-dimensional long-range models and made an analogy to infer features of MBL in higher-dimensional short-range models.





Scientists may have discovered the most massive planet orbiting two stars

“Data will talk to you if you’re willing to listen to it”, said Jim Bergeson, a well-known computer engineer. And that’s probably what scientists from the Raman Research Institute (RRI), Bangalore, and Hans Raj College, University of Delhi, have done. While combing through data on the orbit of a binary star system (MXB 1658-298), collected over a 40-year period (1976-2016), they noticed discrepancies in the orbital period of these stars. They attribute this to an invisible ‘third’ object: probably the most massive planet revolving around a binary star that we have ever known.

In a binary star system, two stars orbit each other, and in some cases one of them is a compact star, like a neutron star or a black hole, that accretes matter from its companion. In the case of MXB 1658-298, an eclipsing X-ray binary system, the compact star is believed to be a very dense neutron star that is actively eating up material from its companion, making it interesting for scientists to study. This system has now become even more appealing after the scientists involved in this study discovered discrepancies in the time between its X-ray eclipses.

While comparing observational data obtained from several X-ray telescopes during three active periods of the binary (1976-1978, 1999-2001 and 2015-2016), the researchers found that the time between X-ray eclipses did not match theoretical predictions. Instead, there appeared to be an invisible force affecting the orbital period of the stars. The invisible force, they think, might be a massive planet (around 20 to 25 times the mass of Jupiter) orbiting the stars. “This work didn’t start as a search for an exoplanet. We were studying the evolution of this X-ray binary – MXB 1658-298. We were just keeping track of the orbital period of the binary”, says Prof. Biswajit Paul, Professor at RRI, explaining the context behind this study.

Detecting a planet around a far-away star can be quite a challenge. When looking through a telescope, the light reflected from the planet is drowned out by the brightness of its parent star or stars, requiring alternate methods to observe them. One way to detect an exoplanet is to measure the dip in the amount of light reaching us, when its orbit is between its parent star and our line of sight from Earth. But in the case of a binary system, knowing the orbital period of the two stars would also help in detecting an exoplanet, as it interacts with the stars through gravity, sometimes changing the orbital period of the stars. For this discovery, the researchers used the latter principle and developed a new technique of measuring the periodic delay in X-rays, to identify the existence of an exoplanet.

“If the arrival of the X-ray eclipses slows down, then it means the orbit is expanding, and if the eclipses appear faster than expected, then the orbit is decaying. Such an evolution of the orbit could be because of the mass transfer between the stars. But in this case, in addition to a decay of the orbit we found that the time of the eclipse was oscillating, which means the binary period was slowing down and then speeding up over a two-year period. This is what prompted us to propose that there could be a third body, which is causing the oscillation in the orbital period. Now, by measuring the change in orbital period, we can calculate the mass of the third body”, explains Prof. Paul.
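The eclipse-timing argument can be sketched numerically. The toy model below uses invented numbers, not the published values for this system (only the ~7.1-hour binary period is the right order of magnitude): it generates eclipse times containing a slow orbital decay plus a small sinusoidal delay from a hypothetical third body, and shows the oscillation emerging in the "observed minus calculated" (O-C) residuals once the smooth decay is fitted away:

```python
import numpy as np

P = 0.296            # binary orbital period in days (~7.1 h)
T0 = 0.0             # reference eclipse time
A = 10.0 / 86400.0   # assumed third-body delay amplitude: 10 s, in days
P3 = 760.0           # assumed third-body orbital period, days
c2 = -1e-11          # assumed orbital-decay coefficient, days per cycle^2

n = np.arange(0, 50000, 250)  # eclipse cycle numbers over ~40 years
t_obs = T0 + n * P + c2 * n**2 + A * np.sin(2 * np.pi * n * P / P3)

# O-C: observed eclipse times minus a constant-period ephemeris
oc = t_obs - (T0 + n * P)

# Fit and remove the smooth quadratic decay; what remains is the
# oscillation attributed to the third body
coeffs = np.polyfit(n, oc, 2)
residual = oc - np.polyval(coeffs, n)
print(f"peak residual ~ {86400 * residual.max():.1f} s")  # close to the 10 s put in
```

The same logic run in reverse, measuring the amplitude and period of the real oscillation, is what lets one estimate the mass of the unseen companion.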

Although not conclusive, the prediction now opens up new avenues of study on MXB 1658-298. “While there are more than 2000 exoplanets that are known, only 20 ‘circumbinary’ planets (planets that revolve around binary stars) have been discovered. This one is the most massive, it is at a great distance from us, and it is around a LMXB (Low Mass X-ray Binary), which are usually very old systems. So it would be very interesting to study this system further”, signs off Prof. Paul, talking about the enthusiasm this study has created.

About the authors:

Prof. Biswajit Paul is a Professor in the Astronomy and Astrophysics Group at RRI.


Chetana Jain is an assistant professor at Hans Raj College, University of Delhi.

Contact: chetanajain11@gmail.com

Stars and star clusters heat up their environment even a million years after their birth, say scientists

How does the presence of a star, or a gravitationally bound group of stars (a star cluster), influence its environment? A previously unclear question has now been answered, thanks to recent collaborative work by scientists at the Raman Research Institute (RRI), the Indian Institute of Science (IISc) and the P. N. Lebedev Physical Institute, Moscow, Russia. They have developed a theoretical model to simulate the interactions between a star cluster and its surroundings, enhancing our understanding of the processes that lead to the formation of stars, clusters and galaxies.

The birth of a star begins with gases accumulating under gravity until the core gets hot enough to initiate nuclear fusion, a process in which lighter atoms merge to form heavier atoms with an enormous outburst of energy. A strong shock wave then pushes the surrounding gas and debris into a bubble, similar to the Oort cloud surrounding our Sun. Following this, radiation from the newborn star bombards the surrounding gas and pushes it further away. But questions have been raised about the mechanism through which stars, or clusters of stars, transfer their energy to the surrounding gas. Prevalent theories suggest that gaseous winds from the stars, and the explosions at the ends of their lives, heat up the gas in their vicinity. This hot gas physically pushes the surrounding gas away, transferring mechanical energy in the process. But this idea did not quite explain some of the observations, calling for a deeper study.

The current research represents a paradigm shift. According to the new theoretical insight provided by the collaboration, the pressure of radiation dominates the interaction between a star cluster and the surrounding gases for the first million years of its life, following which the interaction continues through the heating of the gases. Indeed, previous observations have recorded the heating of the surrounding gases. The researchers specifically predict that the heating is due to high-energy photons emitted by the star cluster, which bombard the surrounding gas particles, causing their temperature to increase.

“What we found, something which wasn’t really anticipated, is that radiation from clusters has a two-pronged effect in a way. Initially, the cluster exerts radiation pressure where it interacts by pushing against the gas. After about a million years, the radiation pressure decreases as the gas particles move far away from the cluster, but the thermal pressure starts heating the gases, causing them to expand. This thermal pressure wasn’t understood very well”, explains Biman Nath, a Professor of Astronomy and Astrophysics at RRI.

The researchers then built a computer model to simulate the effects of radiation pressure and thermal pressure from a star cluster on its surrounding environment. When they compared their results to observations from the Tarantula Nebula, also known as 30 Doradus, they were found to match closely. “Earlier observations of the nebula showed that the X-ray luminosity from the star cluster, or the brightness of X-rays, was much lower than what was theoretically predicted. Now, after incorporating our new insights, the observations match the predictions very well”, remarks Mr. Siddhartha Gupta, a postgraduate student at RRI and IISc.

The current study advances our knowledge of the formation and evolution of galaxies, but we are far from a complete understanding of the processes. “Galaxies are made of two major building blocks – gases and stars. So it’s very important to understand the interactions between these building blocks at a micro level, to really understand how galaxies form. People have looked at the macro-scale interactions, but there are severe gaps in our understanding of these micro-level interactions. This work can be thought of as a starting point towards our understanding of the evolution of galaxies”, says Mr. Gupta about the importance of this work.

How does debris from supernovae make molecules? Scientists may have an answer

‘We are all made of stardust’ goes the common saying. The phrase is more than just rhetoric; it alludes to the formation of atoms and molecules in the universe. Most of the atoms and a few of the molecules around us were formed in the bowels of exploding stars, and went on to form planets, oceans, living organisms and everything in between. Now, a collaborative study by the Raman Research Institute (RRI), Bangalore, the Indian Institute of Science (IISc), Bangalore, and the P. N. Lebedev Physical Institute, Moscow, examines the processes that may have led to the formation of these molecules from the debris of exploding stars.

Galaxies contain swirling masses of gas that eventually coalesce under gravity to form stars. “In the most common types of galaxies, like our Milky Way, the star formation rate is between 0.5 and 1 solar masses per year, resulting in one or two supernova explosions in a century”, explains Dr. Arpita Roy, a former research student at RRI and IISc. Occasionally, however, events like close encounters or collisions with other galaxies can shake things up within a galaxy, causing the rate of star formation to shoot up 10 or even 100 times. Such galaxies, referred to as starburst galaxies, act as an important window into the birth and evolution of stars and galaxies. “Central regions of starburst galaxies have very high densities and are called starburst nuclei. They are the hubs for very high star formation and hence are, in general, quite violent. They are also the sources of energy, momentum, mass and heavy chemical elements”, adds Dr. Roy.

The current study focused on the processes that lead to the formation of molecules in expanding shock waves caused by supernovae, called superbubbles. “Multiple coherent supernovae in the starburst nuclei create strong shocks or superbubbles. When these strong shocks move through the interstellar medium (ISM), they sweep up ISM materials and store them in dense, thin shells behind the shocks, which further cool and form molecules. These molecular clouds could then again be sites for the formation of second generation of stars”, says Dr. Roy. “It has always been surprising to see how molecules can survive in these extreme violent environments in the central regions of the starburst galaxies. Now, there can be two situations: either these molecules are the old ones, which were originally there in the parent molecular clouds, where the massive stars were initially born, or, these are the new molecules formed in-situ in the dense superbubble shells. Our model tries to understand these issues in detail and describes that molecules in observed outflows in the central regions of the starburst galaxies can be explained by in-situ molecule formation processes” she adds.

The researchers proposed a simplified model in which superbubbles are considered to be roughly spherical. Further factors, such as the dynamics (velocity), density and temperature of such a spherical superbubble, are calculated. With these values entered into the model, the researchers ran simulations to predict the processes that lead to molecule formation. “We performed numerical hydrodynamic simulations with proper numerical descriptions of thermodynamics with all relevant heating (cosmic-ray heating, photo-electric heating, ionising radiation, dust emission etc.) and cooling mechanisms, which then determine the conditions for efficient molecule formation”, explains Prof. Yuri Shchekinov, a Professor at the P. N. Lebedev Physical Institute. This model of molecule formation is a collective effort by Prof. Biman Nath at RRI and Prof. Prateek Sharma at IISc, along with Dr. Roy and Prof. Shchekinov.

Although the model is simple, the simulations matched observations of molecular outflows in superbubbles with simple spherical morphology. This supports the accuracy of the proposed model, and hence of the processes that govern molecule formation within starburst nuclei. The proposed model opens up the prospect of studying other aspects of galaxies and the Universe as a whole. “The detailed information of mass, energy and transport of heavy elements to the interstellar medium (ISM) helps us study the overall evolution of the ISM of the host galaxies. These heavy elements may sometimes also enrich the intergalactic medium (IGM) via superbubble evolutions. Therefore, for many astrophysical purposes, such as how stars form and evolve to affect the evolution of the ISM and also the Universe in general, starburst nuclei are the most important experimental sites”, concludes Dr. Roy.

Scientists design “paper” that could be written over and over again using UV light

Paper, considered a symbol of knowledge, has been used indiscriminately in the past century, causing severe environmental degradation. One study estimates that with all the paper we waste each year, we could build two 12-foot-high walls of paper from New Delhi to Bangalore! Electronic storage is not a better alternative, since it poses the challenge of handling the e-waste that is generated. Now, a collaborative study by researchers headed by Prof. Sandeep Kumar and Dr. A. R. Yuvaraj at the Raman Research Institute (RRI), Bangalore, and the University of Malaysia has developed a novel technology that could reduce the use of paper and the generation of e-waste by changing the way we present information. The researchers have developed an optical storage device made of gold nanoparticles decorated with compounds called azobenzenes.

The newly developed device stores and displays information on a substrate and also restores the substrate to its original state, using only light. “An optical storage device is a type of display device which can store energy, through light illumination”, explains Dr. A. R. Yuvaraj from RRI. The researchers started by reducing gold chloride to form gold nanoparticles. The synthesized particles were then made to react with thiolated azobenzene moieties, chemicals that attach to the gold nanoparticles, to form the azobenzene-coated gold nanoparticle liquid crystal, measuring just around two nanometres on average. “One of the interesting things we noted was the difference in the way the molecules of this new compound are arranged in its liquid crystal form. Gold nanoparticles are usually arranged in well-defined planes. However, once the azobenzene molecules are attached, the compound loses this arrangement, which gives it some of its unique optical properties”, he adds.

For the new compound to function as an optical storage device, further fabrication of the liquid crystal is required. “We loaded the mixture of newly formed liquid crystal and commercially available LC (5CB) on to a Liquid Crystal prototype cell, made by sandwiching two polyimide coated, unidirectional substrates, which performs as an optical storage device”, explains Dr. Yuvaraj on how the team designed the optical storage device. Information can now be written onto the cell by simply shining UV light onto the cell through a photomask – a photographic pattern that covers some areas of the cell and is transparent to others. The areas that are exposed to light undergo a process called photoisomerization, which allows for information to be stored on the cell.

Photoisomerization is a process by which some molecules undergo a structural change when energy, in the form of light, is supplied to them. Molecules in a stable configuration, called the trans state, are changed into an unstable configuration, called the cis state, on illumination. This allows information to be written on the compound using photomasks. “Instead of using pens and markers to write, we can now write using just a UV pen. Once written, on shining light of around 450nm wavelength, which falls in the visible range, we can change the compound back to the trans state, thus erasing what was written. We can write and erase just using light”, remarks Dr. Yuvaraj.

The new device allows for easy storage of information by enabling simple writing, erasing and rewriting process all using light. It could replace writing boards in schools, advertisement and display boards, newspapers, magazines and anything that uses paper to store information. By designing the device for permanent optical storage, it could even replace business and ID cards. This innovation can help reduce the stress on trees, save water used for paper production, and reduce paper waste and e-waste, thus conserving our natural resources. A clean and a green way ahead, indeed!

Clearing the Haze: Scientists design a way to see through fog

Peering through a thick early-morning mist, looking into a smoke-filled room, or scanning muddy waters, we encounter a common problem: vision through such media gets obscured, and we cannot see what lies within. Many a time we have wanted to take pictures in foggy conditions, only to get a coarse image with no discernible features. ‘Seeing’ in these conditions would seem impossible without expensive equipment like thermal imaging cameras or radar technology. The dream of that perfect picture on a foggy morning could be closer to reality, thanks to new research: scientists from the Raman Research Institute (RRI), Bengaluru, and the University of Rennes, France, are working to make seeing through the haze a reality.

When a human eye or camera lens ‘sees’ an object, most of the light reflected from the object goes straight into the eye or lens. When a scattering medium, like a gas or a fluid, fills the space between the eye and the object, the object appears obscured. This happens because tiny suspended particles present in such media scatter the light in random directions. “To capture an image, the light rays should travel in straight lines, or should be deviated in a predictable way, as in a lens. However, if the light is scattered in unpredictable ways, only a diffused illumination with no recognisable image is obtained”, remarks Prof. Hema Ramachandran of the Raman Research Institute, who led this research activity.

Scientists have been trying for decades to overcome this impediment, as the ability to image through turbid media has a wide range of applications. A few techniques have been developed that either require lasers that emit bursts of light of very short duration or use cameras with very short exposure times, both of which are very expensive. Cheaper alternatives usually require much longer data collection and processing times. For most applications, however, one would like to form images in real time, with little or no delay.

Scientists from RRI and the University of Rennes have together addressed this problem, and have developed a simple, inexpensive, yet powerful solution. They have successfully demonstrated, for the first time, instant, real-time imaging through strongly scattering media simulating a quarter of a kilometre of fog. This was achieved without any sophisticated equipment like ultra-short pulsed lasers or ultrafast cameras. Using an inexpensive LED light source, an ordinary scientific camera, and a typical desktop computer for the computations, they obtained images within a few thousandths of a second after the camera records the diffuse illumination shots. The images refresh at rates faster than the eye can perceive, providing flicker-free, real-time images of the scene obscured by the strongly scattering medium.

The research has many applications where visibility through murky conditions is important. Medical professionals, pilots, rescuers, and even adventurers and photographers, require clear sight in low-visibility conditions. “Such real-time imaging through strongly scattering media opens up innumerable possibilities for applications. Compact, low cost portable devices can be made, and used in many different areas. For example, aircraft landing under poor visibility, bio-medical imaging through flesh using ordinary light sources, rescue operations in smoke-filled environments are some areas that can utilise this development”, says Prof. Ramachandran.

“Realtime imaging through strongly scattering media was achieved by a combination of ideas. The scene was illuminated with light that was modulated in intensity. Due to the strong turbidity of the intervening medium, most of the photons undergo repeated random scattering. Because of this, the unprocessed camera shots show only uniform, diffuse illumination, with no discernible feature. We then applied the concept of quadrature lock-in detection to the problem of distinguishing photons that have not been deviated from their original trajectories from the photons that have been randomly scattered” explains Prof. Ramachandran.

“Conventionally, scientists use the Fourier transform technique, where one takes a time series and finds out the strength of contribution at each frequency over a certain range, and then picks out the dominant frequency. Here, as we know the source modulation frequency, a lot of computation time can be saved by looking at just that one frequency. Not only that, unlike Fourier transform, where data has to be recorded for a length of time before the Fourier transform can be applied, in our approach, we can start the processing as soon as the first shot of diffused illumination is captured. This too is an enormous saving on time. Last, but not the least, we have utilised the wonderful parallel processing capabilities that even a typical desktop computer has. Using the Graphics Processing Unit (GPU) of the computer, we have carried out the computations for each pixel of the image in parallel. All these ideas put together have enabled more than a thousand-fold increase in imaging speed, that has enabled the instant extraction of the hidden scene”, she explains.
She goes on to say, “Computers these days have very good GPUs; especially the ones used for gaming can already perform multiple tasks simultaneously. We have just utilized this data-processing ability of the modern GPU, along with our algorithm, to successfully see through turbid media. QLD is easily amenable to speed-up on GPUs as compared to FFT”. Thus, by combining the strengths of the new algorithm, QLD, with currently available technology, the researchers have been able to cut down the data-processing time by a significant margin. This allows them to capture images in real time.
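The quadrature lock-in idea can be sketched in a few lines of NumPy. The modulation frequency, frame rate and tiny "scene" below are invented for illustration and are not the experimental parameters; the point is that multiplying each camera frame by sine and cosine references at the known modulation frequency and averaging suppresses the large unmodulated background from random scattering:

```python
import numpy as np

def quadrature_lock_in(frames, f_mod, fps):
    """Recover a per-pixel modulation-amplitude image from a stack of
    camera frames (shape: n_frames x H x W) of a scene illuminated at
    frequency f_mod. Averaging against sin/cos references rejects the
    unmodulated (randomly scattered) background."""
    n = frames.shape[0]
    t = np.arange(n) / fps
    ref_i = np.sin(2 * np.pi * f_mod * t)[:, None, None]
    ref_q = np.cos(2 * np.pi * f_mod * t)[:, None, None]
    I = (frames * ref_i).mean(axis=0)  # in-phase component
    Q = (frames * ref_q).mean(axis=0)  # quadrature component
    return 2.0 * np.sqrt(I**2 + Q**2)  # demodulated amplitude image

# Toy demonstration: a hidden two-pixel "scene" modulated at 90 Hz,
# buried under a large diffuse background plus noise.
rng = np.random.default_rng(1)
n, fps, f_mod = 400, 1000.0, 90.0
scene = np.zeros((4, 4))
scene[1, 2] = 1.0
scene[2, 1] = 0.5
t = np.arange(n) / fps
frames = (scene[None] * np.sin(2 * np.pi * f_mod * t)[:, None, None]
          + 50.0 + rng.normal(0, 0.5, (n, 4, 4)))
image = quadrature_lock_in(frames, f_mod, fps)
print(image.round(2))  # bright pixels reappear where the scene was
```

Because each pixel is demodulated independently, the per-pixel loop parallelizes trivially, which is what makes the GPU implementation described above so effective; and unlike a full Fourier transform, only the one known frequency ever needs to be evaluated.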

Technology today does allow us to see in a cloudy atmosphere with the help of infrared, X-ray and other imaging devices. But apart from such devices being expensive and often bulky, to see fine features and details in an image, a camera that captures visible light has to be used. The new technique could advance the fields of medicine, navigation, climate science and even space exploration by recording clearer images and videos even in turbid conditions.

Now test the purity of milk you consume, on the spot…

Milk is considered to be one of the best sources of nutrients like protein, fat, carbohydrates and minerals, making it an ideal food for infants and adults alike. The importance of milk to our dietary needs has made it a prime target for adulteration, the most common method known to us all being the addition of water. Unfortunately, it does not stop there. Harmful chemicals including melamine, formalin, detergents, sugars, urea and a host of other substances are used as adulterants. One common adulteration method used in India is to mix an emulsifier into vegetable oil, resulting in a white paste. The paste is then diluted and mixed with chemicals like urea until the consistency of milk is achieved. The proportion of these ingredients is calculated to mimic the fat and solids-not-fat (SNF) percentages of unadulterated milk. The cost to human health and well-being of consuming this synthetic milk is huge: it deprives consumers of the vital nutrients otherwise obtained from unadulterated milk, while at the same time being harmful to health. Currently used tests, based on fat and lactometer readings, obviously fail in this scenario. Thus there is an urgent need to develop other measurement techniques to address this major health concern. For maximum societal impact, the measuring platform should be cheap, easy to use, robust and precise, with high sensitivity.

Prof. V. Lakshminarayan’s group at RRI has proposed a simple test that satisfies all the above requirements. The test is based on measuring the impedance of the ionic constituents of normal milk versus adulterated milk, and can act as a first-level screen for adulterated milk samples. Additionally, being a hand-held device, it empowers the consumer to demand “on the spot” screening of milk samples.

The basic research was completed at RRI, while collaborators at the Department of Electronic Systems Engineering (DESE), Indian Institute of Science, and the DST-National Hub for Health Care Instrumentation (NHHID) will undertake the product design and manufacture.

It is envisioned that a simple “Dip and Read” device can be made available at milk collection and distribution centers to rapidly assess for synthetic milk.


A picture of the hand-held device to detect synthetic milk