We host news of discoveries in various fields of science, with a focus on space, medical treatments, fringe science, microbiology, chemistry and physics, while providing commercial and cultural context and deeper insight. http://koyalgroupinfomag.com/blog/

Tuesday, December 9, 2014

The Koyal Group Info Mag Review: Shaping Public Perceptions of Radiation Risk

On Monday, November 17, the US House of Representatives passed H.R. 5544, the Low Dose Radiation Research Act, which called for the National Academies to “conduct a study assessing the current status and development of a long-term strategy for low dose radiation research.”

Coincidentally that was the same day that the National Academy of Sciences hosted a publicly accessible, all day meeting to determine if there had been enough new developments in radiation health effects research to justify the formation of a new BEIR (Biological Effects of Ionizing Radiation) committee. If formed, that would be BEIR VIII, the latest in a series of committees performing a survey of available research on the health effects of atomic (now ionizing) radiation.

I had the pleasure of attending the meeting, which was held in the ornate NAS building on Constitution Avenue in Washington, DC. There were about 20 presenters talking about various aspects of the scientific and political considerations associated with the decision to form BEIR VIII. Several of the presenters had performed experimental research under the currently moribund Department of Energy’s Low Dose radiation research program.

That intriguing program used modern genetics techniques to learn a great deal about the dynamic nature of DNA in organisms and about the ways that living tissues isolate and repair the recurring damage caused by metabolic processes, heat, chemicals and ionizing radiation. It was defunded gradually beginning in 2009 and completely by 2011, with the money making its way to solar and wind energy research as the Office of Science shifted its priorities under a flat top-line budget.

The agenda allocated a considerable amount of time for public comments. There were a couple of members of the audience interested in the science falsifying the “no safe dose” model who took advantage of the opportunities to speak, but so did a number of professional antinuclear activists from Maryland, Ohio, New York and Tennessee.

Need Better Results This Time

An epic struggle with important health, safety, cost and energy abundance implications is shaping up with regard to the way that the officially sanctioned science and regulatory bodies treat the risks and benefits associated with using ionizing radiation at low doses and dose rates for medical uses, industrial uses and power production.

We must make sure that this battle for science, hearts and minds is not as asymmetrical as the one fought between 1954 and 1964. One skirmish worth winning will be to encourage the passage of the Low Dose Radiation Research Act and the annual appropriations that will enable it to function long into the future.

Here is a brief version of that lengthy prior engagement, where there were huge winners and losers. Losers included truth, general prosperity, peace and the environment. Partial winners included people engaged in the global hydrocarbon economy in finance, exploration, extraction, refinement, transportation, tools, machines and retail distribution. There were also big financial winners in pharmaceuticals, medical devices, oncology, and agriculture.

Rockefeller Funded Survey

During a 1954 Rockefeller Foundation Board of Trustees meeting, several of the trustees asked the President of the National Academy of Sciences (NAS) if his esteemed organization would be willing to review what was known about the biological effects of atomic radiation.

The board did not have to pick up the phone or send a letter to make that request. Detlev Bronk, who was the serving president of the NAS, was already at the table as a full member of the Rockefeller Foundation Board of Trustees. The board agreed that, based on their interpretations of recent media coverage, the public was confused and not properly informed about the risks of radiation exposure and the potential benefits of the Atomic Age.
The tasking given to the NAS was to form a credible committee that would study the science and issue a report “in a form accessible to seriously concerned citizens.”1

Aside: For historical context, that Foundation board meeting took place within months after President Eisenhower made his “Atoms for Peace” speech in December 1953. That speech to the United Nations announced a shift in focus of the Atomic Age from weapons development to more productive applications like electrical power generation and ship propulsion.

At the time the request to the NAS was made, the Rockefeller Foundation had been funding radiation biology-related research for at least 30 years, including the Drosophila mutation experiments that Hermann Muller conducted during the 1920s at the University of Texas. Foundation board members and supported scientists had been following developments in atomic science since the earliest discoveries of radiation and the dense energy stored inside atomic nuclei.

In March 1948, the Tripartite Conferences on radiation protection, a group that included experienced radiation researchers and practitioners from the US, Canada and the UK, had determined that the permissible doses for humans should be reduced from 1 mGy/day (in SI units) to 0.5 mGy/day or 3 mGy/week.

That reduction was not made because of any noted negative health effects, but to provide an additional safety factor.

In between these two extremes there is a level of exposure — in the neighborhood of 0.1 r/day — which all experience to date show to be safe, but the time of observation of large numbers of people exposed at this rate under controlled conditions, is too short to permit a categorical assertion to this effect.2
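
The unit arithmetic in this aside can be checked with a quick sketch. Note that the conversion factor below (1 roentgen ≈ 8.77 mGy air kerma) and the six-day working week are illustrative assumptions, not figures stated in the Tripartite documents quoted here:

```python
# Rough bookkeeping for the historical dose limits quoted above.
MGY_PER_R = 8.77            # assumed: ~8.77 mGy air kerma per roentgen

# The "0.1 r/day" figure from the quote, expressed in SI units:
old_limit_mgy_per_day = 0.1 * MGY_PER_R
print(round(old_limit_mgy_per_day, 2))   # ~0.88, close to the 1 mGy/day limit

# The reduced limit: 0.5 mGy/day over an assumed six-day working week
weekly = 0.5 * 6
print(weekly)                            # 3.0, matching the 3 mGy/week figure
```

The near-equality of 0.1 r/day and 1 mGy/day is why the two figures are used interchangeably in the passage above.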

End Aside.

Biological Effects of Atomic Radiation

The first NAS Biological Effects of Atomic Radiation committee began its work in April 1955. There were six subcommittees, each of which authored a section of the committee’s report. The report was identified as a preliminary version that was to be followed with a more technically detailed report scheduled to appear within the next couple of years, if desired by responsible government agencies.

Unlike the documents supporting the permissible dose limits that came out of the Tripartite Conferences mentioned in the aside above, the NAS BEAR 1 committee report, especially the section from the Genetics Committee, was professionally promoted and received extensive media coverage and public attention.


The NAS held a press conference in Washington, DC on June 12 to announce the release of the report and answer questions. Among other media attention, that press conference resulted in no fewer than six related articles in the June 13, 1956 edition of the New York Times. Several additional articles were published during the following weeks. The selection of pieces included a lengthy article that started at the top of the right-hand column of the paper and continued with another 20-25 column inches on page 17. Read the full article here


Sunday, December 7, 2014

The Koyal Group Info Mag Review P53: The Gene That Cracked the Cancer Code by Sue Armstrong – review

“A cure for cancer” – the phrase is so often repeated, surely it must finally materialise? To anyone not familiar with the developing story of cancer research, the position seems tragically unsatisfactory. Billions of pounds and decades of work by thousands of researchers have produced much better prognoses for some cancers, but harsh forms of chemotherapy and radiotherapy are still the standard treatment and the much sought-after magic cure remains tantalisingly out of reach.



As Sue Armstrong points out at the beginning of her book, while we may naively wonder why so many people get cancer, researchers are asking “Why so few?”. Every time a cell divides – skin and digestive-tract cells are constantly proliferating – there is a possibility of genetic errors. For cancer to develop, the control mechanism in just one cell must be thrown into disorder, resulting in unlimited replication of that rogue cell. Considering the stupendous number of cell divisions occurring in the human body, the development of cancer is rare. Scientists have long suspected that there is a very powerful protective mechanism at work.
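
A toy calculation makes the “why so few?” point concrete. Both numbers below are purely illustrative assumptions, not figures from the book or this review:

```python
import math

# Treat every cell division as an independent chance of producing one
# rogue, endlessly replicating cell.
divisions = 1e16     # assumed order of magnitude of lifetime cell divisions
p_escape = 1e-17     # assumed chance a single division evades all protection

# P(at least one rogue division) = 1 - (1 - p)^N, computed stably via expm1:
p_cancer = -math.expm1(-p_escape * divisions)
print(round(p_cancer, 3))   # ~0.095: rare, despite ~10^16 chances
```

The striking part is how small the per-division risk must be for that answer to come out under one: anything much larger than 10^-16 would make cancer a near-certainty, which is the quantitative shape of the “powerful protective mechanism” argument.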


P53 (the name refers to a protein of molecular weight 53 kilodaltons) is the cancer prophylactic for most multicellular organisms; it has been dubbed the guardian of the genome. While cancer has many causes and can be insidiously malignant throughout the body, p53 is the single most unifying factor in the disease: for most kinds of cancer to develop, p53’s suppressor activity has to have been disabled.

It has taken scientists a long time to establish some of the basic facts about cancer. In 1911 the pathologist Peyton Rous reported a virus that caused cancer in chickens. For decades this finding was dismissed: cancer, according to the official line, could not be caused by a virus. Rous lived long enough to see Francis Crick and James Watson’s double helix structure of 1953 establish DNA’s role at the heart of life and for his own theory to be subsequently vindicated; he received the Nobel prize in 1966 for his pioneering work.

How did we come to probe these minute molecular workings of nature? Most popular texts on genomics and molecular biology blithely report the results without offering any insight into how the scientists have reached their conclusions. Armstrong’s book has one of the best accounts I’ve read of how science is actually performed. She asks, what can they actually see? When it comes to a gene, which is only two nanometres wide, the answer is “nothing”; they work by inferring from experiments on things that they can see. As she says: “It is the ‘unseeable’ nature of molecular biology … that makes it so difficult to grasp.” She quotes one of her scientists, Peter Hall: “it’s based on faith, ultimately.” And even when scientists have a good sense of what their experiments are telling them, they’re up against the fact that life is an immensely complicated process: we can land a probe on a distant comet after a 10-year flight because the Newtonian clockwork of bodies in space is predictable. But all-embracing laws of biology are hard to find.

The process of discovery goes like this (and p53 is a classic example): something unexpected and odd turns up; investigation begins; its character gradually becomes clearer but its purpose remains a mystery; then evidence accumulates to suggest a function. That evidence is often misleading and, in the case of p53, a function diametrically opposed to the true one was ascribed to it for 10 years: it was thought to be a cancer-causing protein. Then came the moment of clarity and the potentially great unifying principle was born: in 1989, p53 was revealed as the master tumour suppressor – order was established at last.

There are great hopes that our knowledge of p53 will lead to novel cancer treatments, but the pattern has grown much more complicated since then. In some situations p53 can cause cancer, and for cancers to grow they need a mutated and disabled p53. In science, these cycles of discovery go on forever, and so will the battle between cancer and p53.

But progress is being made. One of the brightest hopes for therapy using p53 is in families with a predisposition to cancer. The reason for this blight is that the family members have each inherited a mutant copy of p53 and are therefore without the normal protection it provides. An experimental gene therapy (Advexin) already exists to correct this, but in 2008 the US regulatory body refused to license the treatment. A similar product, Gendicine, is licensed in China and approval for its clinical use is being sought in the US. One common story in today’s medical research is of remarkable possibilities constantly being blocked by a sluggish regulatory system and the skewed priorities of Big Pharma, which prefers to develop bestselling drugs that will have the widest use.


Armstrong’s book will offer many readers a sense of hope, but might also induce intense frustration at the long time it takes for discoveries in the lab to filter down to hospitals and the marketplace. Nevertheless, we can be sure that p53, even if it is not the “cure for cancer”, will have an honourable role to play in our attempts to find one.

Monday, December 1, 2014

The Koyal Group Info Mag Review: In the Digital Age, Science Publishing Needs an Upgrade (Op Ed)



Daniel Marovitz is CEO of Faculty of 1000, a publisher for life scientists and clinical researchers that comprises three services: F1000Prime, F1000Research and F1000Posters. F1000Research is an open science publishing platform for life scientists that offers immediate publication and transparent peer review. Before that, he was the CEO and co-founder of Buzzumi, a cloud-based enterprise software company. He contributed this article to Live Science's Expert Voices: Op-Ed & Insights.

Quick quiz: which is bigger, the global music industry or scientific publishing? You may be surprised to learn that the music industry racks up $15 billion each year in sales, whereas scientific publishing quietly brings in $19 billion. This "under-the-radar" colossus gets very little attention, yet influences us all.

In many ways, published science tracks and influences the course of our species on this planet. It enables scientists to find out what other researchers are working on and what discoveries they have made. It helps governments decide where to invest and helps universities decide whom to hire. Most people don't give it a second thought but they should. All of us are consumers of science, and perhaps most crucially, all of us are eventually medical patients dependent on the discoveries published in medical journals. The way science is disseminated and the way articles are published is not just a geeky question for librarians — it impacts our society in profound ways.


The history of scientific journals dates back to 1665, when the French Journal des sçavans and the English Philosophical Transactions of the Royal Society first published research results. Around the same time, the first peer review process was recorded at the Royal Society of London. By the 20th century, peer review had become common practice to help allocate scientific funding, and before the Internet, all scientific journals were published on paper.

Paper costs money to buy, more money to print, and even more money to transport. It made sense that journals worked hard to find the "best" studies because they were constrained to publishing 10 to 20 articles each month. They limited the number of pages authors could write and severely limited (and sometimes charged the authors extra for) color and additional images. The process was long and laborious for everyone involved, and was constrained by the limits and costs of a necessarily analog world.

You would naturally assume that the Internet Age would have changed all of that, but while all journals now publish online, most of the process is still based on a paper past. This means many perfectly sound articles are rejected, articles take too long to be published, and most articles are published with conclusions, but without the data that supports them. Enough data should be shared by authors to ensure that anyone can replicate their research efforts and achieve similar results.

Such processes seriously bias what is published, impacting all aspects of science and thus society: from new scientific discoveries and the development of new medicines, to scientists' livelihoods and how public money is spent.



Monday, November 17, 2014

Koyal Info Mag History's first: Space probe lands on comet


Twenty years ago, a space mission to land a probe on a comet was envisioned. Now, ten years and 6 billion km after launch, the Rosetta mission has seen success.

It's undoubtedly a resounding accomplishment for the European Space Agency (ESA), which launched the Rosetta mission in March 2004. After blasting off from the Kourou spaceport in French Guiana, Rosetta and its lander Philae logged at least 6 billion km just to reach the comet.
For a time, the spacecraft was on standby for almost three years: it had travelled some 500 million miles from the sun, so far that its solar panels could not absorb enough energy to recharge and keep things going. Fortunately, Rosetta came out of its hibernation this January and approached its target: comet 67P/Churyumov-Gerasimenko.

The comet, discovered in 1969, orbits the sun at a speed of 135,000 km/h. Measuring 4 km wide, it is shaped rather like a rubber duck, which initially left people doubting whether a landing would be possible at all. (If the Philae lander touched down on an uneven surface, it could tip over -- and it has no way to right itself.)

After its separation from its carrier, Philae started its precarious seven-hour descent onto the comet.

For the controller team, those were seven hours of nerve-wracking anticipation, especially as a problem arose at a critical moment. There seemed to be an issue with the lander's thruster, which could result in a rough landing at best, and the team's failure to fix the fault almost led to the landing being cancelled. Eventually, they decided to proceed in spite of it.

Inside the ESA control center in Darmstadt, Germany, tensions were high.

Finally, around 4pm (GMT), Philae's communications reached Earth: touchdown.

“We are there. We are sitting on the surface. Philae is talking to us. We are on the comet,” said Stephan Ulamec, Philae's lander manager at the control room.

The space mission has already registered a number of firsts, including the first spacecraft to enter close orbit around a comet and, of course, the first to land a probe on one. And if things go well, it could also be the first spacecraft to travel with a comet as it circles the sun.

ESA's director general, Jean-Jacques Dordain announced, "We are the first to do this, and that will stay forever."

In another interview with Koyal Info Mag, Dordain noted, "This is a big step for human civilization."

After putting the Philae lander on the ground, the Rosetta spacecraft is expected to orbit the comet, taking more images and collecting various data as it travels toward the sun.

The Rosetta mission will be completed in December 2015, though if there's enough fuel left in the spacecraft it might be given a six-month extension for even more daring projects. Philae itself has enough power to continue working until March, before its electronics get fried by the sun's heat. It might, however, continue to cling to the comet for around six years before losing its grip.

This daring space mission aims to further study the molecular and physical composition of a comet, believed to be made from materials that existed even before the solar system's creation. Koyal Info Mag hopes more will be known of how the solar system formed and how comets are instrumental to the life-sustaining qualities of a planet like Earth.

Sunday, November 16, 2014

The Koyal Group Info Mag: How turning science into a game rouses more public interest


Chris Lintott first met Kevin Schawinski in the summer of 2007 at the astrophysics department of the University of Oxford. Lintott had just finished a PhD at University College London on star formation in galaxies. He was also something of a minor celebrity in the astronomy community: he was one of the presenters of the BBC's astronomy programme The Sky at Night alongside Sir Patrick Moore, and had written a popular science book called Bang!: The Complete History of the Universe with Moore and Brian May, the Queen guitarist and astrophysicist. "I went to give a seminar talk as part of a job interview," Lintott recalls. "And this guy in a suit jumped up and started having a go at me because I hadn't checked my galaxy data properly. I thought it was some lecturer who I'd pissed off, but it turned out to be Kevin [Schawinski], who was a student at the time."

Most galaxies come in two shapes: elliptical or spiral. Elliptical galaxies can have a range of shapes, from perfectly spherical to a flattened rugby-ball shape. Spirals, like the Milky Way, have a central bulge of stars surrounded by a thin disk of stars shaped in a spiral pattern known as "arms". The shape of a galaxy is an imprint of its history and how it has interacted with other galaxies over billions of years of evolution. It is a mystery to astronomers why galaxies have these shapes and how the two geometries relate to one another. For a long time, astronomers assumed that spirals were young galaxies, with an abundance of stellar nurseries where new stars were being formed. These regions typically emit hot, blue radiation. Elliptical galaxies, on the other hand, were thought to be predominantly old, replete with dying stars, which are colder and therefore have a red colour. Schawinski was working on a theory that contradicted this paradigm. To prove it, he needed to find elliptical galaxies with blue regions, where star formation was taking place.

At the time, astronomers relied on computer algorithms to filter datasets of images of galaxies. The biggest bank of such images came from the Sloan Digital Sky Survey, which contained more than two million astronomical objects, nearly a million of which were galaxies, and had been taken by an automated robotic telescope in New Mexico with a 2.5-metre mirror. The problem was that, while computers can easily filter galaxies based on their colour, it was impossible for an algorithm to pick out galaxies based on their shape. "It's really hard to teach a computer a pattern-recognition task like this," says Schawinski, now a professor in astronomy at the Swiss Federal Institute of Technology in Zurich. "It took computer scientists a decade to [teach a computer] to tell human faces apart, something every child can do the moment they open their eyes." The only way to prove this theory, Schawinski decided, was to look at each galaxy image, one by one.
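
The colour half of that filtering really is a one-liner, which is why it was automated long before morphology. The catalogue rows and the u - r > 2.2 "red" cut below are hypothetical illustrations, not values from the Sloan pipeline:

```python
# A colour cut is trivial to automate; no equivalent one-liner exists for shape.
# Hypothetical catalogue rows; the u - r > 2.2 "red" threshold is assumed.
galaxies = [
    {"id": 1, "u_minus_r": 2.8},   # red: classically presumed elliptical
    {"id": 2, "u_minus_r": 1.4},   # blue: classically presumed spiral
    {"id": 3, "u_minus_r": 2.3},
]

red_ids = [g["id"] for g in galaxies if g["u_minus_r"] > 2.2]
print(red_ids)   # [1, 3]
```

Morphology offers no such threshold on a single number, which is why Schawinski was left clicking through images by hand.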

Schawinski did it for a week, working 12 hours every day. He would go to his office in the morning, click through images of galaxies while listening to music, break for lunch, and continue until late in the evening. "When I attended Chris's seminar, I had just spent a week looking through fifty thousand galaxies," says Schawinski.

When Lintott moved to Oxford, he and Schawinski started debating the problem of how to classify datasets with millions of images. They weren't the only ones. "Kate Land, one of my colleagues, was intrigued by a recent paper which claimed most galaxies were rotating around a common axis," Lintott says. "Which is indeed puzzling because the expectation was that these axes would be totally random." Land needed more data, which required looking at the rotation of tens of thousands of galaxies. "Out of the blue she asked me if I thought that, if they put a laptop with galaxy images in the middle of a pub, would people classify them?" Lintott recalls.

At the time, NASA had launched a project called Stardust@home, which had recruited about 20,000 online volunteers to identify tracks made by interstellar dust in samples from a comet. "We thought that if people are going to look at dust tracks, then surely they'll look at galaxies," says Lintott. Once it was decided they would go ahead with the project, they built a website within days. The homepage displayed an image of a galaxy from the dataset. For each image, the volunteers were asked if the galaxy was a spiral or elliptical. If a spiral, they were asked if they could discern the direction of its arms and the direction of its rotation. There were also options for stars, unknown objects and overlapping galaxies.

The site, called Galaxy Zoo, launched on July 11, 2007. "We thought we would get at least some amateur astronomers," Lintott says. "I was planning to go to the British Astronomical Society, give a talk and get at least 50 of their members to classify some galaxies for us." Within 24 hours of its launch, Galaxy Zoo was receiving 60,000 classifications per hour. "The cable we were using melted and we were offline for a while," Schawinski says. "The project nearly died there." After ten days, users from all over the world had submitted eight million classifications. By November, every galaxy had been seen by an average of 40 people. Galaxy Zoo users weren't just classifying galactic shapes, they were making unexpected discoveries. Barely a month after launch, Dutch schoolteacher Hanny van Arkel discovered a strange green cluster that turned out to be a never-before-seen astronomical object. Christened Hanny's Voorwerp ("voorwerp" means "object" in Dutch), it remains the subject of intense scientific scrutiny. Later that year, a team of volunteers compiled evidence for a new type of galaxy -- blue and compact -- which they named Pea galaxies.

"When we did a survey of our volunteers we found out they weren't astronomers," Lintott says. "They weren't even huge science fans and weren't that interested in making new discoveries. The majority said they just wanted to make a contribution." With Galaxy Zoo, Schawinski and Lintott developed a powerful pattern-recognition machine, composed entirely of people, that could not only process data incredibly quickly and accurately -- aggregating the results via a democratic statistical process -- but also enable individual serendipitous discoveries, a fundamental component of scientific enquiry. With robotic telescopes spewing terabytes of images every year, they found an answer to big data in a big crowd of volunteers. Since Galaxy Zoo's first discoveries, this pioneering approach of crowdsourcing science has gained a strong following not only with the general public but also within the scientific community. Today, there are hundreds of crowdsourcing projects involving a variety of scientific goals, from identifying cancer cells in biological tissues to building nanoscale machines using DNA. These endeavours have resulted in breakthroughs, such as Schawinski and Lintott's discoveries on the subject of star formation, that have merited publication in the most reputable scientific journals. The biggest breakthrough, however, is not the scientific discoveries per se, but the method itself. Crowdsourcing science is a reinvention of the scientific method, a powerful new way of making discoveries and solving problems that would otherwise have remained undiscovered and unsolved.
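
That "democratic statistical process" can be as simple as a majority vote over each galaxy's roughly 40 independent classifications. This is a hypothetical sketch of the idea, not Galaxy Zoo's actual pipeline:

```python
from collections import Counter

def aggregate(labels):
    """Return the majority-vote label and the agreement fraction."""
    winner, votes = Counter(labels).most_common(1)[0]
    return winner, votes / len(labels)

# 40 volunteer classifications of one (hypothetical) galaxy
votes = ["spiral"] * 34 + ["elliptical"] * 4 + ["unknown"] * 2
label, agreement = aggregate(votes)
print(label, agreement)   # spiral 0.85
```

The agreement fraction is the useful by-product: galaxies where volunteers split evenly can be flagged as genuinely ambiguous rather than misclassified, something a single expert pass cannot provide.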

At around the time Lintott and his team were developing Galaxy Zoo, two computer scientists at the University of Washington in Seattle, Seth Cooper and Adrien Treuille, were trying to use online crowds to solve a problem in biochemistry called protein folding.

A protein is a chain of smaller molecules called amino acids. Its three-dimensional shape determines how it interacts with other proteins and, consequently, its function in the cell. Each protein folds into essentially one native structure, and finding that structure is a notoriously difficult problem: for a given chain of amino acids, there are astronomically many ways in which it could be folded into a three-dimensional shape. Biochemists know thousands of sequences of amino acids but struggle to find how they fold into the three-dimensional structures that are found in nature.
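
A Levinthal-style back-of-the-envelope count shows why brute-force search fails. The three-conformations-per-residue figure is a standard illustrative assumption, not a number from this article:

```python
# If each residue could sit in just 3 coarse conformations, even a modest
# 100-residue chain has 3**100 candidate shapes -- far too many to enumerate.
states_per_residue = 3    # illustrative assumption
residues = 100

conformations = states_per_residue ** residues
print(len(str(conformations)))   # 48: a 48-digit number of possible folds
```

No computer can enumerate a 48-digit number of candidates, which is why both the Rosetta algorithm and the Foldit players described below rely on heuristics and intuition rather than exhaustive search.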

Cooper and Treuille's lab had previously developed an algorithm which attempted to predict these structures. The algorithm, named Rosetta, required a lot of computer power, so it was adapted to run as a screensaver that online volunteers could install. The screensaver, called Rosetta@home, required no input from volunteers, so Cooper and Treuille had been brought in to turn it into a game. "With the screensaver, users could see the protein and how the computer was trying to fold it, but they couldn't interact with it," Cooper says. "We wanted to combine that computer power with human problem-solving."

Cooper and Treuille were the only computer scientists in their lab. They also had no idea about protein folding. "In some sense, we were forced to look at this very esoteric and abstract problem through the eyes of a child," Cooper says. "Biochemists often tell you that a protein looks right or wrong. It seemed that with enough training you can gain an intuition about how a protein folds. There are certain configurations that a computer never samples, but a person can just look at it and say, 'that's it'. That was the seed of the idea."

The game, called Foldit, was released in May 2008. Players start with a partially folded protein structure, arrived at by the Rosetta algorithm, and have to manipulate it by clicking, pulling and dragging amino acids until they reach its most stable shape. The algorithm calculates how stable the structure is; the more stable, the higher the score.

"When we first trialled the game with the biochemists, they weren't particularly excited," Cooper says. "But then we added a leaderboard, where you could see each other's names and respective scores. After that, we had to shut down the game for a while because it was bringing all science to a halt."


Foldit turned the goal of solving one of biochemistry's hardest problems into a game that can be won by scoring points. Over the past five years, over 350,000 people have played Foldit; these players have been able to consistently fold proteins better than the best algorithms. "Most of these players didn't have a background in biochemistry and they were beating some of the biochemists who were playing the game," Cooper says. "They also discovered an algorithm similar to one that the scientists had been developing. It was more efficient than any previously published algorithms."

Tuesday, November 11, 2014

The Koyal Group Info Mag: How A Failed Experiment On Rats Sparked A Billion-Dollar Infant-Care Breakthrough

Researchers studying massages on rat pups helped advance the science on neo-natal care for premature babies, and they will be awarded on Thursday for their breakthrough.


WASHINGTON -- At a research lab in the Duke University Department of Pharmacology in 1979, a group of scientists sparked a major breakthrough in infant care from a failed experiment on rats.
At the time, Dr. Saul Schanberg, a neuroscientist and physician, was running tests on newborn rats to measure growth-related markers (enzymes and hormones) in their brains. Together with Dr. Cynthia Kuhn and lab technician Gary Evoniuk, he kept getting strange results. With the rat pups separated from their mothers in order to run the experiments, their growth markers kept registering at low levels.

The team varied the trials. They used an anesthetized mother rat to feed the pups during and after the experimentation, and tried keeping the pups and mother in the same cage but with a divider to see if a lack of pheromones was the problem.

“The experiment failed,” Kuhn recalled.

So the team approached it from another angle. Instead of stabilizing the rat pups so they could run tests, they tried to figure out what was wrong with the pups in the first place. From a friend, Kuhn had heard theories that massaging the pups could produce positive results. Evoniuk, meanwhile, had watched mother rats groom their pups by vigorously licking them. He proposed doing essentially the same thing, minus the tongue.
The team began using a wet brush to rub the rat pups at different pressure levels. Eventually, they found the right one, and on cue, the deprivation effect was reversed.

"I said, 'Let’s give it a shot,' and it worked the first time and the second time," recalled Evoniuk. "It was just the touch.”

Though they had no way of knowing it, Schanberg's team had taken the first step in a process that would upend conventional wisdom about post-natal care. Three and a half decades later, the theories his team stumbled upon through failure would save billions of dollars in medical costs and affect countless young parents' lives.

On Thursday night, the team will be honored for its work. A coalition of business, university and scientific organizations will present the Golden Goose Award to them and other researchers behind similarly successful projects. The prize shines a light on how research with odd-sounding origins (really, massaging rat pups?) can produce groundbreaking results. More broadly, it's meant to showcase the importance of federally funded scientific research.

The work done by Schanberg’s team is inextricably tied to the support of taxpayers -- not just because the group operated from a grant of approximately $273,000 from the National Institutes of Health. As Kuhn and Evoniuk both argued, the breakthrough they were able to produce never could have happened with a private funding source. The demand for an immediate result or for profit wouldn’t have allowed them to pivot off the initial failure.

“It is not a straight path from point A to point B,” said Evoniuk. “There are all kinds of weird little detours. We were really following a detour from where this work started. The federal funding gave people like Saul the ability to follow their scientific instincts and try to find the answers to interesting questions that popped up.”

As Congress members head back to their districts before the midterm elections, fights over science funding appear to be low on the list of priorities. The two parties are in the midst of an informal truce, having put in place budget caps this past winter. And no one seems particularly eager to disrupt that truce, even if science advocates warn it needs upending.

While NIH's funding increased this year from last year, when sequestration forced an estimated $1.55 billion reduction, it still fell $714 million short of pre-sequestration levels. Adjusted for inflation, it was lower than every year but President George W. Bush's first year in office.

Surveying the climate, the American Academy of Arts & Sciences released a report this week showing that the United States "has slipped to tenth place" among economically advanced nations in overall research and development investment as a percentage of GDP. For science advocates, it was another cause for alarm. Young researchers, they argue, are leaving the field or the country. Projects that could yield tremendous biomedical breakthroughs aren't getting off the ground.

Looming over the Golden Goose awards ceremony is this reality: Would an experiment testing rat-pup massages ever survive this political climate? Would it be admonished as waste by deficit hawks in Congress?

“Researchers massaging rats sounds strange, but oddball science saves lives,” said Rep. Jim Cooper (D-Tenn.), who is participating in the awards ceremony. “In this instance, premature babies got a healthier start. If Congress abandons research funding, we could miss the next unexpected breakthrough.”
NIH funding was certainly critical to the successful research behind rat-pup massages. "Without the NIH none of this would have happened, zero," said Kuhn.

But serendipity also played a role. Not long after he made his discovery, Schanberg was at an NIH study section with Tiffany Field, a psychologist at the University of Miami School of Medicine. Field had also been doing research -- also funded by the NIH -- on massage therapies for prematurely born babies. But she was getting poor results.

"We were just sharing our data, basically," Field recalled of that conversation. "I was telling him we were having trouble getting any positive effects with the preemies. … He talked about how his lab technician had a eureka experience when he saw the mother's tongue licking the babies."

They concluded that Field probably wasn't massaging the premature babies hard enough. Instead of applying "moderate pressure," as Schanberg had been doing, she was applying more of a "soft stroking."

A study done on rats became a study on humans. Field changed her experiment and began to see results right away. Instead of the discomfort of a tickle-like light touch, the moderate pressure had a tonic effect, stimulating receptors. Babies' heart rates slowed; the preemies seemed more relaxed; they were able to absorb food and gain weight; and they showed more growth hormone, increased insulin, greater bone density and greater movement of the GI tract. The magnitude of the finding was enormous.

"We published the data and we actually did a cost-benefit analysis at that point and determined we could save $4.8 billion per year by massaging all the preemies, because of all the significant cost savings for the hospital," Field recalled.

Her conclusion challenged the prevailing sentiment of the time that prematurely born babies should be left in incubators, fed intravenously, and not touched immediately after birth lest they become agitated and potentially harmed. But few people listened.

"The only person who paid attention to it was Hillary Clinton," she recalled, noting that Clinton, who was working on a health care reform initiative as First Lady, expressed interest in the research.

Since then, however, conceptions of post-natal care have changed. Subsequent studies have confirmed Field's findings, though others have questioned whether there is enough research, or the proper methodology, to draw sweeping conclusions. Nevertheless, whereas few people used massage therapies in the '80s and '90s, as of eight years ago 38 percent of neonatal care units were using those therapies, said Field. The method is estimated to save $10,000 per infant -- roughly $4.7 billion a year.

Those involved in the research still marvel that the chain of events started with a failed experiment on rats and turned on a fortuitous meeting between two scientists.

"We didn’t set out to figure out how to improve nursing care," said Kuhn. "But we wound up saving a lot of money and helped babies grow better, their cognitive outcome was better, they got out of the [intensive care units] sooner. … There was no downside."

"One thing led to another," said Evoniuk. "We were just kind of following an interesting question not thinking we were going to change medical practice."

Schanberg won't be around to receive his Golden Goose award Thursday night. He died in 2009, and his granddaughter will accept on his behalf. But those who worked with him say that his research remains a testament to the good results that an inquisitive mind and a respectable funding stream can produce. It's a story that scientists may find uplifting.

But it doesn't necessarily have a happy ending.

In the aftermath of her work with Schanberg, Field continued studying natal care, starting the Touch Research Institute at the University of Miami in 1992 with help from the NIH and Johnson & Johnson. Her work has been widely cited in medical journals and newspaper articles. But the funding streams have run dry, and now she's faced with the prospect of dramatically narrowing the scope of her lifelong work.

"We are faced with having to close the institute because we don’t have any NIH grants," she said. "It used to be a third of us would get the grants. Now they are funding at something like the seventh percentile."

Friday, November 7, 2014

The Koyal Group Info Mag: A glimpse into the inner workings of the 3D brain


- Scientists at the Mercator Research Group created the new model
- It lets experts build artificial networks of nerve cells in the hippocampus on a computer screen, so they can explore how memories form
- The hippocampus is thought to be one of the oldest regions of the brain
- Scientists are using their new tool to monitor how neural signals spread through the network over time
- In future they hope to show how animals memorise food sources and dangers

The way neurons are interconnected in the brain is complicated.

But now scientists have created a new type of computer model to make artificial networks of nerve cells found in the hippocampus region of the brain.

The hippocampus helps us form personal memories, and it is hoped the tool will shed more light on how these memories develop as they move through the region's different structures.


Scientists have created a new type of computer model to make artificial networks of nerve cells in the hippocampus part of the brain. A model of a rat's hippocampus is pictured, with different colours denoting different regions. It is hoped the tool will shed more light on how the hippocampus forms memories

Scientists will also use the models to explore how the structure connects to the rest of the brain, and which information arrives where and when.

The model has been created by Dr Martin Pyka and his colleagues from the Mercator Research Group in Germany.

Dr Pyka developed a method that allows the brain's anatomic data and neurons to be reconstructed as a 3D model.

Once built, this 3D model can be manipulated on a computer.


The hippocampus enables humans to navigate space securely and to form personal memories. The region is seahorse shaped and is shaded in red in this illustration


Researchers from the Mercator Research Group in Germany, developed the method that means the brain can be constructed as a 3D model, and can be manipulated on a computer (pictured). Structures that form a rat's hippocampus, including CA1, CA3, subiculum and entorhinal cortex are pictured in blue, red, yellow and green

They claim that their approach is unique because it automatically calculates the neural interconnections based on the neurons' positions in space.

Scientists can generate plausible network structures more easily than with other tools.
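The core idea, deriving connectivity automatically from neuron positions, can be illustrated with a small sketch. This is a hypothetical example, not the actual rule used by Dr Pyka's model: neurons are placed at random 3D coordinates (in the real tool they would come from reconstructed hippocampal anatomy), and each pair is connected with a probability that decays with their distance.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical illustration: place N model neurons at random 3D positions.
# In the real tool these positions would come from anatomical reconstructions.
n_neurons = 200
positions = rng.uniform(0.0, 1.0, size=(n_neurons, 3))  # arbitrary units

# Pairwise Euclidean distances between all neurons (via broadcasting).
diffs = positions[:, None, :] - positions[None, :, :]
distances = np.sqrt((diffs ** 2).sum(axis=-1))

# Connect pairs with a probability that decays with distance -- one common
# way to derive connectivity from position; the rule in the published model
# may differ.
length_scale = 0.2
p_connect = np.exp(-distances / length_scale)
np.fill_diagonal(p_connect, 0.0)  # no self-connections

adjacency = rng.random((n_neurons, n_neurons)) < p_connect
print("number of connections:", adjacency.sum())
```

Because connectivity falls out of geometry, changing the modelled shape or size of the structure changes the network's wiring, which is how such a model can relate the hippocampus' form to its firing dynamics.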

They are using the models to monitor how neural signals spread through the network over time, according to the study published in the journal Frontiers in Neuroanatomy.

Dr Pyka has, so far, found evidence that the hippocampus' form and size could explain why neurons in those networks fire at certain frequencies.

In future, the method may help researchers understand how animals combine different types of information within the hippocampus to form memories, for example memorising food sources or dangers and recalling them in particular situations.

The researchers have so far shown off a model of a rat’s hippocampus including its different layers such as the CA1 and CA3 regions, the subiculum and entorhinal cortex.


Dr Pyka has so far found evidence that the hippocampus' form and size could explain why neurons in those networks fire at certain frequencies. Neurons in a mouse hippocampus are pictured