We host news of discoveries in various fields of science with a focus on space, medical treatments, fringe science, microbiology, chemistry and physics, while providing commercial and cultural context and deeper insight. http://koyalgroupinfomag.com/blog/

Monday, February 23, 2015

The Koyal Group Info Mag: Higgs Boson Discovered In Superconductors

A team of physicists from India, Israel, Germany and the US has reportedly detected the Higgs boson, the particle believed to be responsible for giving everything in the universe its mass, for the first time in superconductors. What's more, this newly detected Higgs boson analogue in superconductors is more stable and far cheaper to produce. Scientists will now have an easier way to observe it even in ordinary laboratories.

The so-called 'God particle' was detected 3 years ago in Switzerland using the Large Hadron Collider (LHC) by CERN (European Organization for Nuclear Research). The USD 10 billion LHC is the world's biggest single machine and the most powerful particle collider. It was primarily built for the purpose of finding the Higgs boson.

The lead researcher, Professor Aviad Frydman of Bar-Ilan University, said: "Just as the CERN experiments revealed the existence of the Higgs boson in a high-energy accelerator environment, we have now revealed a Higgs boson analogue in superconductors."

Proving the presence of the Higgs boson is a difficult feat because the particle is short-lived and cannot be detected directly. On top of that, producing it in a particle accelerator requires huge amounts of energy.

The energy scale used, The Koyal Group Info Mag reported, was only a thousandth of an electron volt. This is a huge contrast to the gigaelectronvolts needed in accelerators like the LHC.
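
To put those two figures side by side (a back-of-the-envelope comparison based only on the numbers quoted above, not on the paper itself), the superconductor experiment probes an energy scale roughly twelve orders of magnitude below that of the LHC:

```latex
\frac{E_{\mathrm{LHC}}}{E_{\mathrm{superconductor}}}
  \sim \frac{10^{9}\,\mathrm{eV}}{10^{-3}\,\mathrm{eV}} = 10^{12}
```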

However, only a particular amount of energy is required in superconductors to awaken the "Higgs mode" -- too much and it will break apart the electron pairs that serve as the superconductor's basic charge carriers.

To solve this, Frydman and his team used ultra-thin and disordered "superconducting films of Indium Oxide and Niobium Nitride near the superconductor-insulator critical point". In theory, once that point is reached, the rapid decay of the Higgs mode no longer occurs; hence researchers can awaken the Higgs mode with only low energies.

"The parallel phenomenon in superconductors occurs on a different energy scale entirely -- just one-thousandth of a single electronvolt. What's exciting is to see how, even in these highly disparate systems, the same fundamental physics is at work," said Frydman.


A superconductor is a special type of metal which, when cooled to extremely low temperatures, allows electrons to move from one atom to another without hindrance. Once it reaches the so-called 'critical temperature' and becomes 'superconductive', it carries current without releasing sound, heat or any other form of energy. Surprisingly, as The Koyal Group Info Mag noted, it was this property of superconductors which inspired the concept of the Higgs boson five decades ago.

Sunday, November 16, 2014

The Koyal Group Info Mag: How turning science into a game rouses more public interest


Chris Lintott first met Kevin Schawinski in the summer of 2007 at the astrophysics department of the University of Oxford. Lintott had just finished a PhD at University College London on star formation in galaxies. He was also something of a minor celebrity in the astronomy community: he was one of the presenters of the BBC's astronomy programme The Sky at Night alongside Sir Patrick Moore, and had written a popular science book called Bang!: The Complete History of the Universe with Moore and Brian May, the Queen guitarist and astrophysicist. "I went to give a seminar talk as part of a job interview," Lintott recalls. "And this guy in a suit jumped up and started having a go at me because I hadn't checked my galaxy data properly. I thought it was some lecturer who I'd pissed off, but it turned out to be Kevin [Schawinski], who was a student at the time."

Most galaxies come in two shapes: elliptical or spiral. Elliptical galaxies can have a range of shapes, from perfectly spherical to a flattened rugby-ball shape. Spirals, like the Milky Way, have a central bulge of stars surrounded by a thin disk of stars shaped in a spiral pattern known as "arms". The shape of a galaxy is an imprint of its history and how it has interacted with other galaxies over billions of years of evolution. It is a mystery to astronomers why they have these shapes and how the two geometries are related to one another. For a long time, astronomers assumed that spirals were young galaxies, with an abundance of stellar nurseries, where new stars were being formed. These regions typically emit hot, blue radiation. Elliptical galaxies, on the other hand, were thought to be predominantly old, replete with dying stars, which are colder, and therefore have a red colour. Schawinski was working on a theory which contradicted this paradigm. To prove it, he needed to find elliptical galaxies with blue regions, where star formation was taking place.

At the time, astronomers relied on computer algorithms to filter datasets of images of galaxies. The biggest bank of such images came from the Sloan Digital Sky Survey, which contained more than two million astronomical objects, nearly a million of which were galaxies, and had been taken by an automated robotic telescope in New Mexico with a 2.5-metre mirror. The problem was that while computers can easily filter galaxies based on their colour, it was impossible for an algorithm to pick out galaxies based on their shape. "It's really hard to teach a computer a pattern-recognition task like this," says Schawinski, now a professor of astronomy at the Swiss Federal Institute of Technology in Zurich. "It took computer scientists a decade to [teach a computer] to tell human faces apart, something every child can do the moment they open their eyes." The only way to prove his theory, Schawinski decided, was to look at each galaxy image, one by one.

Schawinski did it for a week, working 12 hours every day. He would go to his office in the morning, click through images of galaxies while listening to music, break for lunch, and continue until late in the evening. "When I attended Chris's seminar, I had just spent a week looking through fifty thousand galaxies," says Schawinski.

When Lintott moved to Oxford, he and Schawinski started debating the problem of how to classify datasets with millions of images. They weren't the only ones. "Kate Land, one of my colleagues, was intrigued by a recent paper which claimed most galaxies were rotating around a common axis," Lintott says. "Which is indeed puzzling because the expectation was that these axes would be totally random." Land needed more data, which required looking at the rotation of tens of thousands of galaxies. "Out of the blue she asked me whether, if we put a laptop with galaxy images in the middle of a pub, people would classify them," Lintott recalls.

At the time, Nasa had launched a project called Stardust@home, which had recruited about 20,000 online volunteers to identify tracks made by interstellar dust in samples from a comet. "We thought that if people are going to look at dust tracks, then surely they'll look at galaxies," says Lintott. Once it was decided they would go ahead with the project, they built a website within days. The homepage displayed the image of a galaxy from the dataset. For each image, the volunteers were asked if the galaxy was a spiral or elliptical. If a spiral, they were asked if they could discern the direction of its arms and the direction of its rotation. There were also options for stars, unknown objects and overlapping galaxies.

The site, called Galaxy Zoo, launched on July 11, 2007. "We thought we would get at least some amateur astronomers," Lintott says. "I was planning to go to the British Astronomical Society, give a talk and get at least 50 of their members to classify some galaxies for us." Within 24 hours of its launch, Galaxy Zoo was receiving 60,000 classifications per hour. "The cable we were using melted and we were offline for a while," Schawinski says. "The project nearly died there." After ten days, users from all over the world had submitted eight million classifications. By November, every galaxy had been seen by an average of 40 people. Galaxy Zoo users weren't just classifying galactic shapes, they were making unexpected discoveries. Barely a month after launch, Dutch schoolteacher Hanny van Arkel discovered a strange green cluster that turned out to be a never-before-seen astronomical object. Christened Hanny's Voorwerp ("voorwerp" means "object" in Dutch), it remains the subject of intense scientific scrutiny. Later that year, a team of volunteers compiled evidence for a new type of galaxy -- blue and compact -- which they named Pea galaxies.

"When we did a survey of our volunteers we found out they weren't astronomers," Lintott says. "They weren't even huge science fans and weren't that interested in making new discoveries. The majority said they just wanted to make a contribution." With Galaxy Zoo, Schawinski and Lintott developed a powerful pattern-recognition machine, composed entirely of people who could not only process data incredibly quickly and accurately -- aggregating the results via a democratic statistical process -- but also enable individual serendipitous discoveries, a fundamental component of scientific enquiry. With robotic telescopes spewing terabytes of images every year, they found an answer to big data in a big crowd of volunteers. Since Galaxy Zoo's first discoveries, this pioneering approach of crowdsourcing science has gained a strong following not only with the general public but also within the scientific community. Today, there are hundreds of crowdsourcing projects involving a variety of scientific goals, from identifying cancer cells in biological tissues to building nanoscale machines using DNA. These endeavours have resulted in breakthroughs, such as Schawinski and Lintott's discoveries on the subject of star formation, that have merited publication in the most reputed scientific journals. The biggest breakthrough, however, is not the scientific discoveries per se, but the method itself. Crowdsourcing science is a reinvention of the scientific method, a powerful new way of making discoveries and solving problems that could have otherwise remain undiscovered and unsolved.

At around the time Lintott and his team were developing Galaxy Zoo, two computer scientists at the University of Washington in Seattle, Seth Cooper and Adrien Treuille, were trying to use online crowds to solve a problem in biochemistry called protein folding.

A protein is a chain of smaller molecules called amino acids. Its three-dimensional shape determines how it interacts with other proteins and, consequently, its function in the cell. Each protein folds into essentially one native structure, and finding that structure is a notoriously difficult problem: for a given chain of amino acids, there is an astronomically large number of ways in which it could be folded into a three-dimensional shape. Biochemists know thousands of sequences of amino acids but struggle to work out how they fold into the three-dimensional structures that are found in nature.
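
To get a feel for why this is so hard (an illustrative back-of-the-envelope estimate, not a figure from the article): even if each amino acid could only take on a handful of local conformations, the number of possible shapes grows exponentially with chain length, so exhaustive search is hopeless.

```latex
N_{\mathrm{shapes}} \approx 3^{100} \approx 5 \times 10^{47}
\quad \text{for a 100-residue chain with } \sim 3 \text{ states per residue}
```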

Cooper and Treuille's lab had previously developed an algorithm which attempted to predict these structures. The algorithm, named Rosetta, required a lot of computer power, so it was adapted to run as a screensaver that online volunteers could install. The screensaver, called Rosetta@home, required no input from volunteers, so Cooper and Treuille had been brought in to turn it into a game. "With the screensaver, users could see the protein and how the computer was trying to fold it, but they couldn't interact with it," Cooper says. "We wanted to combine that computer power with human problem-solving."

Cooper and Treuille were the only computer scientists in their lab. They also had no idea about protein folding. "In some sense, we were forced to look at this very esoteric and abstract problem through the eyes of a child," Cooper says. "Biochemists often tell you that a protein looks right or wrong. It seemed that with enough training you can gain an intuition about how a protein folds. There are certain configurations that a computer never samples, but a person can just look at it and say, 'that's it'. That was the seed of the idea."

The game, called Foldit, was released in May 2008. Players start with a partially folded protein structure, which has been produced by the Rosetta algorithm, and have to manipulate it by clicking, pulling and dragging amino acids until they arrive at its most stable shape. The algorithm calculates how stable the structure is; the more stable, the higher the score.
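
Foldit's real scoring comes from the Rosetta energy function, which is far more sophisticated than anything that fits here, but the relationship the paragraph describes -- lower energy means a more stable fold means a higher score -- can be sketched with a toy model. Everything below (the clash penalty, the attraction term, the score scaling) is a simplified assumption.

```python
import math

def toy_energy(coords):
    """Crude stand-in for a protein energy function.

    coords: list of (x, y, z) positions, one per amino acid.
    Penalises steric clashes (residues closer than 1.0 unit) and weakly
    rewards compactness, so lower energy roughly means a more stable fold.
    Real Rosetta terms (hydrogen bonds, solvation, torsion angles, ...) are omitted.
    """
    energy = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            if d < 1.0:
                energy += 100.0 * (1.0 - d)   # clash penalty
            else:
                energy -= 1.0 / d             # weak attraction favouring compact shapes
    return energy

def game_score(coords, scale=1000.0, offset=8000.0):
    """Map energy to a game-style score: more stable (lower energy) => higher score."""
    return offset - scale * toy_energy(coords)
```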

"When we first trialled the game with the biochemists, they weren't particularly excited," Cooper says. "But then we added a leaderboard, where you could see each other's names and respective scores. After that, we had to shut down the game for a while because it was bringing all science to a halt."


Foldit turned the goal of solving one of biochemistry's hardest problems into a game that can be won by scoring points. Over the past five years, over 350,000 people have played Foldit; these players have been able to consistently fold proteins better than the best algorithms. "Most of these players didn't have a background in biochemistry and they were beating some of the biochemists who were playing the game," Cooper says. "They also discovered an algorithm similar to one that the scientists had been developing. It was more efficient than any previously published algorithms."

Tuesday, November 11, 2014

The Koyal Group Info Mag: How A Failed Experiment On Rats Sparked A Billion-Dollar Infant-Care Breakthrough

Researchers studying massage in rat pups helped advance the science of neonatal care for premature babies, and they will receive an award on Thursday for their breakthrough.


WASHINGTON -- At a research lab in Duke University's Department of Pharmacology in 1979, a group of scientists sparked a major breakthrough in infant care from a failed experiment on rats.

At the time, Dr. Saul Schanberg, a neuroscientist and physician, was running tests on newborn rats to measure growth-related markers (enzymes and hormones) in their brains. Together with Dr. Cynthia Kuhn and lab technician Gary Evoniuk, he kept getting strange results: with the rat pups separated from their mothers in order to run the experiments, their growth markers kept registering at low levels.

The team varied the trials. They used an anesthetized mother rat to feed the pups during and after the experimentation, and tried keeping the pups and mother in the same cage but with a divider to see if a lack of pheromones was the problem.

“The experiment failed,” Kuhn recalled.

So the team approached it from another angle. Instead of stabilizing the rat pups so they could run tests, they tried to figure out what was wrong with the pups in the first place. From a friend, Kuhn had heard theories that massaging the pups could produce positive results. Evoniuk, meanwhile, had watched mother rats groom their pups by vigorously licking them. He proposed doing essentially the same thing, minus the tongue.
The team began using a wet brush to rub the rat pups at different pressure levels. Eventually, they found the right one, and on cue, the deprivation effect was reversed.

"I said, 'Let’s give it a shot,' and it worked the first time and the second time," recalled Evoniuk. "It was just the touch.”

Though they had no way of knowing it, Schanberg’s team had taken the first step in a process that would see the upending of conventional wisdom when it came to post-natal care. Three and a half decades later, the theories that his team stumbled upon through failure would save billions of dollars in medical costs and affect countless young parents’ lives.

On Thursday night, the team will be rewarded for its work. A coalition of business, university and scientific organizations will present the Golden Goose Award to them and other researchers with similar successful projects. It is a prize given for the purpose of shining a light on how research with odd-sounding origins (really, massaging rat pups?) can produce groundbreaking results. More broadly, it’s meant to showcase the importance of federally funded scientific research.

The work done by Schanberg’s team is inextricably tied to the support of taxpayers -- not just because the group operated from a grant of approximately $273,000 from the National Institutes of Health. As Kuhn and Evoniuk both argued, the breakthrough they were able to produce never could have happened with a private funding source. The demand for an immediate result or for profit wouldn’t have allowed them to pivot off the initial failure.

“It is not a straight path from point A to point B,” said Evoniuk. “There are all kinds of weird little detours. We were really following a detour from where this work started. The federal funding gave people like Saul the ability to follow their scientific instincts and try to find the answers to interesting questions that popped up.”

As Congress members head back to their districts before the midterm elections, fights over science funding appear to be low on the list of priorities. The two parties are in the midst of an informal truce, having put in place budget caps this past winter. And no one seems particularly eager to disrupt that truce, even if science advocates warn it needs upending.

While NIH's funding increased this year from last year, when sequestration forced an estimated $1.55 billion reduction, it still fell $714 million short of pre-sequestration levels. Adjusted for inflation, it was lower than every year but President George W. Bush's first year in office.

Surveying the climate, the American Academy of Arts & Sciences released a report this week showing that the United States "has slipped to tenth place" among economically advanced nations in overall research and development investment as a percentage of GDP. For science advocates, it was another sobering cause for alarm. Young researchers, they argue, are leaving the field or country. Projects that could yield tremendous biomedical breakthroughs aren't getting off the ground.

Looming over the Golden Goose awards ceremony is this reality: Would an experiment testing rat-pup massages ever survive this political climate? Would it be admonished as waste by deficit hawks in Congress?

“Researchers massaging rats sounds strange, but oddball science saves lives,” said Rep. Jim Cooper (D-Tenn.), who is participating in the awards ceremony. “In this instance, premature babies got a healthier start. If Congress abandons research funding, we could miss the next unexpected breakthrough.”

NIH funding was certainly critical to the successful research behind rat-pup massages. "Without the NIH none of this would have happened, zero," said Kuhn.

But serendipity also played a role. Not long after he made his discovery, Schanberg was at an NIH study section with Tiffany Field, a psychologist at the University of Miami School of Medicine. Field had also been doing research -- also funded by the NIH -- on massage therapies for prematurely born babies. But she was getting poor results.

"We were just sharing our data, basically," Field recalled of that conversation. "I was telling him we were having trouble getting any positive effects with the preemies. … He talked about how his lab technician had an eureka experiment when he saw his mother's tongue licking the babies."

The conclusion reached was that Field probably wasn't massaging the premature babies hard enough. Instead of applying "moderate pressure" (as Schanberg had been doing) she was applying more of a "soft stroking."

A study done on rats became a study on humans. Field changed up her experiment and began to see results right away. Instead of the discomfort caused by a tickle-like sensation, the moderate pressure had a tonic effect, stimulating receptors. Babies' heart rates slowed down; the preemies seemed more relaxed; they were able to absorb food and gain weight; there was more evidence of growth hormone, an increase in insulin, greater bone density and greater movement of the GI tract. The magnitude of the finding was enormous.

"We published the data and we actually did a cost-benefit analysis at that point and determined we could save $4.8 billion per year by massaging all the preemies, because of all the significant cost savings for the hospital," Field recalled.

Her conclusion challenged the prevailing sentiment of the time that prematurely born babies should be left in incubators, fed intravenously, and not touched immediately after birth lest they become agitated and potentially harmed. But few people listened.

"The only person who paid attention to it was Hillary Clinton," she recalled, noting that Clinton, who was working on a health care reform initiative as First Lady, expressed interest in the research.

Since then, however, conceptions of post-natal care have changed. Subsequent studies have confirmed Field's findings, though others have questioned whether there is enough research or the proper methodology to draw sweeping conclusions. Nevertheless, whereas few people used massage therapies in the '80s and '90s, as of eight years ago 38 percent of neonatal care units were using those therapies, said Field. The method is estimated to save $10,000 per infant -- roughly $4.7 billion a year.

Those involved in the research still marvel that the chain of events started with a failed experiment on rats and turned on a fortuitous meeting between two scientists.

"We didn’t set out to figure out how to improve nursing care," said Kuhn. "But we wound up saving a lot of money and helped babies grow better, their cognitive outcome was better, they got out of the [intensive care units] sooner. … There was no downside."

"One thing led to another," said Evoniuk. "We were just kind of following an interesting question not thinking we were going to change medical practice."

Schanberg won't be around to receive his Golden Goose award Thursday night. He died in 2009, and his granddaughter will accept on his behalf. But those who worked with him say that his research remains a testament to the good results that an inquisitive mind and a respectable funding stream can produce. It's a story that scientists may find uplifting.

But it doesn't necessarily have a happy ending.

In the aftermath of her work with Schanberg, Field continued studying neonatal care, starting the Touch Research Institute at the University of Miami in 1992 with help from the NIH and Johnson & Johnson. Her work has been widely cited in medical journals and newspaper articles. But the funding streams have run dry, and now she's faced with the prospect of dramatically narrowing the scope of her lifelong work.

"We are faced with having to close the institute because we don’t have any NIH grants," she said. "It used to be a third of us would get the grants. Now they are funding at something like the seventh percentile."

Friday, November 7, 2014

The Koyal Group Info Mag: A glimpse into the inner workings of the 3D brain


- Scientists at the Mercator Research Group created the new model
- It lets experts build artificial networks of nerve cells in the hippocampus on a computer screen, so they can explore how memories form
- The hippocampus is thought to be one of the oldest regions of the brain
- Scientists are using their new tool to monitor how neural signals spread through the network over time
- In the future they hope to show how animals memorise food sources and dangers

The way neurons are interconnected in the brain is complicated.

But now scientists have created a new type of computer model to make artificial networks of nerve cells found in the hippocampus region of the brain.

The hippocampus helps us form personal memories, and it is hoped the tool will shed more light on how these memories develop as they move through the region's different structures.


Scientists have created a new type of computer model to make artificial networks of nerve cells in the hippocampus part of the brain. A model of a rat's hippocampus is pictured, with different colours denoting different regions. It is hoped the tool will shed more light on how the hippocampus forms memories

Scientists will also use the models to explore how the structure connects to the rest of the brain, and which information arrives where and when.

The model has been created by Dr Martin Pyka and his colleagues from the Mercator Research Group in Germany.

Dr Pyka developed a method that allows the brain's anatomic data and neurons to be reconstructed as a 3D model.

Once built, this 3D model can be manipulated on a computer.


The hippocampus enables humans to navigate space securely and to form personal memories. The region is seahorse shaped and is shaded in red in this illustration


Researchers from the Mercator Research Group in Germany, developed the method that means the brain can be constructed as a 3D model, and can be manipulated on a computer (pictured). Structures that form a rat's hippocampus, including CA1, CA3, subiculum and entorhinal cortex are pictured in blue, red, yellow and green

They claim that their approach is unique because it enables the neural interconnections to be calculated automatically from the neurons' positions in space.

This lets scientists generate plausible network structures more easily than with other tools.
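
The article doesn't give enough detail to reproduce Dr Pyka's actual method, but the core idea it describes, wiring neurons together automatically from their positions in the reconstructed 3D anatomy, can be sketched as follows. The exponential distance rule and all parameter values are illustrative assumptions, not values from the paper.

```python
import math
import random

def build_network(positions, p_max=0.5, length_scale=200.0, seed=0):
    """Wire up model neurons based purely on their 3D positions.

    positions: list of (x, y, z) coordinates (e.g. in micrometres) sampled
               from the reconstructed anatomy.
    Each ordered pair (i, j) of distinct neurons is connected with probability
    p_max * exp(-distance / length_scale), so nearby neurons are connected
    more often than distant ones.  Returns a list of (pre, post) index pairs.
    """
    rng = random.Random(seed)
    connections = []
    for i, a in enumerate(positions):
        for j, b in enumerate(positions):
            if i != j and rng.random() < p_max * math.exp(-math.dist(a, b) / length_scale):
                connections.append((i, j))
    return connections
```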

They are using the models to monitor how neural signals spread through the network over time, according to the study published in the journal Frontiers in Neuroanatomy.

Dr Pyka has, so far, found evidence that the hippocampus' form and size could explain why neurons in those networks fire at certain frequencies.

In future, this method may help us understand how animals, for example, combine various information to form memories within the hippocampus, in order to memorise food sources or dangers and to remember them in certain situations.

The researchers have so far shown off a model of a rat’s hippocampus including its different layers such as the CA1 and CA3 regions, the subiculum and entorhinal cortex.


Dr Pyka has so far found evidence that the hippocampus' form and size could explain why neurons in those networks fire at certain frequencies. Neurons in a mouse hippocampus are pictured



Wednesday, November 5, 2014

The Koyal Group Info Mag: Scientists Explain Record-High Antarctic Sea Ice



The Antarctic has been brimming with sea ice in 2014, hovering around record-high extent for most of the year. On September 20, it peaked at 7.78 million square miles, breaking the previous Antarctic sea ice extent record, which was set in 2013. But while it might seem that this record throws into question the validity of global warming, scientists say this just isn’t the case.

Up north, the sea ice loss in the Arctic is exactly what you would expect in a warming world. 2012 was a record-low year for Arctic sea ice in the satellite record, and 2014 was the sixth lowest.

But in the Antarctic, sea ice has been growing year over year. This is in contrast to the ice on Antarctic land, which appears to be shrinking faster than previously estimated. The dichotomy suggests that something much more complicated than “warming equals melting” is occurring, and scientists say that more research is needed to understand the observations.

One possible explanation for the growing sea ice extent is that weather patterns around the Antarctic may be pushing more cold, continental air over the ocean, spurring the formation of more sea ice.

Another possibility actually includes the melting of the ice over Antarctica itself – as the land ice melts, it could be flushing large amounts of fresh water, which freezes faster, into the ocean.

“There hasn’t been one explanation yet that I’d say has become a consensus, where people say, ‘We’ve nailed it, this is why it’s happening,’” said Claire Parkinson, a senior scientist at NASA’s Goddard Space Flight Center. “Our models are improving, but they’re far from perfect. One by one, scientists are figuring out that particular variables are more important than we thought years ago, and one by one those variables are getting incorporated into the models.”

When it comes to comparing the Arctic and the Antarctic, it’s probably best not to draw conclusions from their differences. “Some people have looked at the Antarctic increasing trend and use that to suggest that global warming isn’t happening, or that the increase in the Antarctic is offsetting the decrease in the Arctic and that’s simply not true,” says Walt Meier, a research scientist at the NASA Goddard Space Flight Center, in a video on the record extent. “If you look at the magnitudes of the changes we’re seeing in the wintertime, the Arctic is decreasing about twice as fast as the Antarctic is increasing.”

This year’s record extent will likely prompt new research on the difference between the North and South poles, and for good reason, according to Meier. “They’re more sensitive to climate change, and therefore they’re the regions that we see the biggest effects of climate change so far,” says Meier. “They’re kind of the canary in a coal mine of global warming.”


For more science news from The Koyal Group Info Mag, visit our facebook page and follow us on twitter @koyalgroup

Tuesday, November 4, 2014

The future is disappearing: How humanity is falling short of its grand technological promise by The Koyal Group Info Mag

What I find most interesting about typical visions of the future isn’t all the fanciful and borderline magical technology that hasn’t been invented yet, but rather how much of it actually already exists.

Consider something relatively straightforward, like a multi-touch interface on your closet door that allows you to easily browse and experiment with your wardrobe, offering suggestions based on prior behavior, your upcoming schedule and the weather in the locations where you are expected throughout the day. Or a car that, as it makes real-time navigational adjustments in order to compensate for traffic anomalies, also lets your co-workers know that you will be a few minutes late, and even takes the liberty of postponing the delivery of your regular triple-shot, lactose-free, synthetic vegan latte. There’s very little about these types of scenarios that isn’t entirely possible right now using technology that either already exists, or that could be developed relatively easily. So if the future is possible today, why is it still the future?

I believe there are two primary reasons. The first is a decidedly inconvenient fact that futurists, pundits and science fiction writers have a tendency to ignore: Technology isn’t so much about what’s possible as it is about what’s profitable. The primary reason we haven’t landed a human on Mars yet has less to do with the technical challenges of the undertaking, and far more to do with the costs associated with solving them. And the only reason the entire sum of human knowledge and scientific, artistic and cultural endeavor isn’t instantly available at every single person’s fingertips anywhere on the planet isn’t because we can’t figure out how to do it; it’s because we haven’t yet figured out the business models to support it. Technology and economics are so tightly intertwined, in fact, that it hardly even makes sense to consider them in isolation.

The second reason is the seemingly perpetual refusal of devices to play together nicely, or interoperate. Considering how much we still depend on sneakernets, cables and email attachments for something as simple as data dissemination, it will probably be a while before every single one of our devices is perpetually harmonized in a ceaseless chorus of digital kumbaya. Before our computers, phones, tablets, jewelry, accessories, appliances, cars, medical sensors, etc., can come together to form our own personal Voltrons, they all have to be able to detect each other’s presence, speak the same languages, and leverage the same services.

The two reasons I’ve just described as to why the future remains as such — profit motive and device isolation — are obviously not entirely unrelated. In fact, they could be considered two sides of the same Bitcoin. However, there’s still value in examining each individually before bringing them together into a unified theory of technological evolution.

Profitable, Not Possible

Even though manufacturing and distribution costs continue to come down, bringing a new and innovative product to market is still both expensive and surprisingly scary for publicly traded and historically risk-averse companies. Setting aside the occasional massively disruptive invention, the result is that the present continues to look suspiciously like a slightly enhanced or rehashed version of the past, rather than an entirely reimagined future.

This dynamic is something we have mostly come to accept as a tenet of our present technology, but conveniently disregard when contemplating the world of tomorrow. Inherent in our collective expectations of what lies ahead seems to be an emboldened corporate culture that has grown weary of conservative product iteration; R&D budgets unencumbered by intellectual property squabbles, investor demands, executive bonuses and golden parachutes; and massive investment in public infrastructure by municipalities that seem constantly on the verge of complete financial collapse – none of which, as we all know, are particularly reminiscent of the world we actually live in.

One of the staples of our collective vision of the future is various forms of implants: neurological enhancements to make us smarter, muscular augmentation to make us stronger, and subcutaneous sensors and transmitters to allow us to better integrate with and adapt to our environments. With every ocular implant that enables the blind to sense more light and higher resolution imagery; with every amputee who regains some independence through a fully articulated prosthetic; and with every rhesus monkey who learns to feed herself by controlling a robotic arm through a brain-computer interface, humanity seems to be nudging itself ever-closer to its cybernetic destiny.

There’s no doubt in my mind that it is possible to continue implanting electronics inside of humans, and organics inside of machines, until both parties eventually emerge as new and exponentially more capable species. However, what I’m not sure of yet is who will pay for all of it outside of research laboratories. Many medical procedures don’t seem to be enjoying the same trends toward availability and affordability as manufacturing processes, and as far as I can tell, insurance companies aren’t exactly becoming increasingly lavish or generous. As someone who is fortunate enough to have reasonably good benefits, but who still thinks long and hard about going to any kind of a doctor for any reason whatsoever due to perpetually increasing copays and deductibles (and perpetually decreasing quality of care), I can’t help regarding our future cybernetic selves with a touch of skepticism. The extent to which the common man will merge with machines in the foreseeable future will be influenced as much by economics and policy as by technological and medical breakthroughs. After all, almost a decade ago researchers had a vaccine that was 100 percent effective in preventing Ebola in monkeys, but until now, the profit motive wasn’t there to develop it further.

Let’s consider a more familiar and concrete data point: air travel. Growing up just a few miles from Dulles Airport outside of Washington, D.C., my friends and I frequently looked up to behold the sublime, delta-wing form of the Concorde as it passed overhead. I remember thinking that if one of the very first supersonic passenger jets entered service only three years after I was born, then surely by the time I grew up (and assuming the better part of the planet hadn’t been destroyed by a nuclear holocaust unleashed by itchy trigger fingers in the United States or Soviet Union), all consumer air travel would be supersonic. Thirty-eight years after the Concorde was introduced — and 11 years after the retirement of the entire fleet — I think it’s fair to say that air travel has not only failed to advance from the perspective of passengers, but unless you can afford a first- or business-class ticket, it has in fact gotten significantly worse.

It would be unfair of me not to acknowledge that many of us do enjoy in-flight access to dozens of cable channels through a primitive LCD touchscreen (which encourages passengers behind us to constantly poke at our seats, rudely dispelling any hope whatsoever of napping) as well as email-grade Wi-Fi (as opposed to a streaming-media-grade Internet connection), but somehow I’d hoped for a little more than the Food Network and the ability to send a tweet at 35,000 feet about how cool it is that I can send a tweet at 35,000 feet.

Novelty Is Not Progress

I’ve come to the conclusion over the last few years that it’s far too easy to confuse novelty with technological and cultural progress, and nothing in my lifetime has made that more clear than smartphones. It used to be that computers and devices were platforms — hardware and software stacks on top of which third-party solutions were meant to be built. Now, many devices and platforms are becoming much more like appliances, and applications feel more like marginally tolerated, value-add extensions. In some ways, this is a positive evolution, since appliances are generally things that all of us have, depend on, know how to use, and are relatively reasonably priced. But let’s consider a few other attributes of appliances: They typically only do what their manufacturer intends; they are the very paragons of planned obsolescence; and they generally operate either entirely in isolation, or are typically only compatible with hardware or services from the same manufacturer.

Admittedly, comparing a smartphone to a blender or a coffee maker isn’t entirely fair since our phones and tablets are obviously far more versatile. In fact, every time I adjust my Nest thermostat with whatever device happens to be in my pocket, or use Shazam to sample an ambient track in a coffee shop, or search for a restaurant in an unfamiliar city and have my phone (or my watch) take me directly to it, I’m reminded that several conveniences and miracles of the future have managed to thoroughly permeate the present. But one of the tricks I’ve learned for evaluating a current technology is to consider it in the broader context of what I want the future to be. And when I contemplate the kind of future I think most of us want — one in which all our devices interoperate, and consumers have full control over the services those devices support and consume (but more on that in a moment) — there’s a lot about modern smartphones, tablets and the direction of computing in general to be very concerned about.

The reality is that novelty, and both technological and cultural progress, are only loosely related. Novelty is usually about interesting, creative or fun new products and services. It’s about iterative progress like eking out a few more minutes of battery life, or shaving off fractions of millimeters or grams, or introducing new colors or alternating between beveled and rounded edges. But true technological and cultural progress is about something much bigger and far more profound: the integration of disparate technologies and services into solutions that are far greater than the sum of their parts.

Progress is about increasing access to information and media as opposed to imposing artificial restrictions and draconian policies; it’s about empowering the world to do more than just shop more conveniently, or inadvertently disclose more highly targetable bits of personal information; it’s about trusting your customers to do the right thing, providing real and tangible value, and holding yourself accountable by giving all the stakeholders in your business the ability to walk away at any moment. And it’s about sometimes taking on a challenge not only for the promise of financial reward, but simply to see if it can be done, or because you happen to be in a unique position to do so, or because humanity will be the richer for it.

I know I’m probably coming across as a postmodern hippie here, but it’s these kinds of idealistic, and possibly even overambitious, aspirations that should be guiding us toward our collective future — even if we know that it isn’t fully attainable.

I want to be able to use my phone to start, debug and monitor my car and my motorcycle. I want the NFC chip in my phone to automatically unlock my workstations as I approach them — regardless of which operating systems I choose to use. I want to be able to pick which payment system my phone defaults to based on who provides the terms and security practices I’m most comfortable with. I want instant access to every piece of digital media on the planet on any device at any time (and I’m more than willing to pay a fair price for it). I want all my devices to integrate, federate and seamlessly collaborate, sharing bandwidth and sensor input, combining themselves like an array of radio telescopes into something bigger and more powerful than what each one represents individually. I want to pick and choose from dozens of different services for connectivity, telephony, media, payments, news, messaging, social networking, geolocation, authentication and every other service that exists now and that will exist tomorrow. I want to pick the PC, phone, tablet, set-top box, watch, eyewear and [insert nonspecific connected device here] that I like best, and be assured that they will all integrate on a deep level, rather than feeling like I’m constantly being penalized for daring to cross the sacred ecosystem barrier. I want a future limited only by what’s possible rather than by intellectual property disputes, petty corporate feuds, service contracts, shareholder value and artificial lock-in.

And more than anything else, I want a future that is as much about making us intellectually and culturally rich as it is about material wealth.

Free as in Speech

Although we are very clearly living in a time (and headed for a future) that is determined as much by what is profitable as what is possible, it’s important to acknowledge that there are plenty of inspiring exceptions. While it’s undeniable that the U.S. space program has recently fallen upon some difficult times (relying on the Russians to ferry astronauts to and from the ISS sure seemed like a good idea at the time), there’s nothing like watching robots conduct scientific experiments on Mars, or reading about the atmospheric composition of exoplanets, to put NASA’s spectacular portfolio of accomplishments into perspective; starting as early as the late ’60s, academics, engineers, computer scientists and the Department of Defense all came together around the concept of interoperability, which ultimately led to the creation of the Internet and the World Wide Web — possibly two of the most politically, culturally and economically important and disruptive inventions in human history; and then there are collaborative resources like Wikipedia; open-source software projects like Linux, the various Apache projects, Bitcoin and Android; open hardware projects like Arduino, WikiHouse and the Hyperloop project; free and open access to GPS signals; and the myriad of incredibly creative crowd-funded Kickstarter projects that seem to make the rounds weekly.

The reality of technology — and perhaps the reality of most things complex, interesting and rewarding enough to hold our collective attention — is that it is not governed by absolutes, but rather manifests itself as the aggregate of multiple and often competing dynamics. I’ve come to think of technology as kind of like the weather: It is somewhat predictable up to a point, and there are clearly patterns from which we can derive assumptions, but ultimately there are so many variables at play that the only way to know for sure what’s going to happen is to wait and see.

But there is one key way in which technology is not like the weather: We can control it. One of my favorite quotes is by the famous computer scientist Alan Kay who once observed that the best way to predict the future is to invent it. If we want to see a future in which devices freely interoperate, and consumers have choices as to what they do with those devices and the services they connect to, it is up to us to both demand and create it. If we choose instead to remain complicit, we will get a future concerned much more with maximizing profits than human potential. Clearly we need to strike the right balance.

Insofar as technology is a manifestation of our creative expression, it is not unlike free speech. And like free speech, we don’t have to always like or agree with what people choose to do with it, but we do have a collective and uncompromising responsibility to protect it.

The Koyal Group Info Mag: NASA Is Funding Research on Deep Sleep for Transporting Astronauts to Mars


Putting space travelers into a state of deep sleep has been a staple of interstellar science fiction for quite some time, but despite originating as a far-fetched concept, the idea of using suspended animation to enable deep space travel might soon become science fact.

If you’re unfamiliar with the concept, here’s a quick rundown. Traveling far into space is a tricky endeavor. With existing technology, traveling to a planet like Mars takes about 180 days, for example. Keeping a crew of people alive (and entertained) in space for that long isn’t hard, but it does require a lot of food, water, energy, and other supplies. This makes manned long-distance space travel extremely expensive, since hauling more supplies requires huge amounts of storage space, and thousands of additional dollars just to get all that stuff into orbit.

In theory, suspended animation would help solve this problem. If astronauts could be placed in a deep sleep during the journey, they would require far fewer resources along the way. Instead, they could just be put to sleep at the beginning and woken back up when they arrive at their destination.

Now, with a manned mission to Mars likely in its sights, NASA has begun to explore the viability of such an idea, and has recently funded a study by Atlanta-based aerospace engineering firm SpaceWorks Enterprises to help work out the kinks in the process.

The bulk of the study revolves around placing humans in torpor — a state in which metabolic and physiological activity is drastically slowed down. To do this, the company has developed a three-stage system. Step one involves sedating the person and using a neuromuscular blockade to prevent movement, whereas step two is to physically lower the person’s body temperature by about 10 degrees Fahrenheit, thereby reducing cellular activity and metabolic rate by around 50 to 70 percent. This is achieved with the help of cooling pads and a nasally inhaled coolant that lowers the subject’s temperature from the inside out. Then, once in torpor, the subject is hooked up to an intravenous drip that supplies their body with all the nutrients needed to keep them alive.

Using these methods, SpaceWorks has reportedly managed to keep a person in stasis for a week — an impressive feat, but even so, there’s still much work to be done before the technology is ready for primetime. In addition to extending the length of the stasis period, the company has a handful of other hurdles to overcome. The potential onset of pneumonia, muscle atrophy, and bone loss have yet to be addressed; and the long-term effects of stasis on human organs are still largely unknown. SpaceWorks still has a long road ahead of it, but with a few more years of research, it’s not unreasonable to think that suspended animation, cryostasis, torpor – whatever you want to call it – might finally bring a manned mission to Mars within reach.

Monday, November 3, 2014

The Koyal Group Info Mag - New Discoveries Show More Contact Between Far-Flung Prehistoric Humans Than First Thought

The great global migration of early humans out of Africa went further and higher than previously thought, according to two sets of discoveries which suggest that the inhabitants of Easter Island in the Pacific had contact with the people of South America, who in turn had established the highest Stone Age settlement in the world.

The long-debated and controversial theory that the early Polynesians managed to completely cross the Pacific Ocean and land in the Americas is supported by a genetics study of the indigenous Easter Islanders, which found that they share DNA sequences with Native Americans – suggesting close contact between the two populations.

Meanwhile, in a separate study, archaeologists working in the southern Peruvian Andes have unearthed the remains of an early human settlement nearly 4,500 metres (15,000ft) above sea level, an altitude that would have been at the physical limits of human survival and childbearing, scientists said.

The archaeologists estimate that the settlement, which includes a stone shelter decorated with rock art, was occupied about 12,000 years ago, within about 2,000 years of the first humans arriving in South America from Central and North America.

Living at such a high altitude would have been difficult because of the cold temperatures, high solar radiation and low oxygen concentrations, but the evidence, which included stone tools used for butchering animals, suggested that the site was occupied for long periods of time.

“We don’t know if people were living there year-round, but we strongly suspect they were not just going there to hunt for a few days, then leaving. There were possibly even families living at these sites, because we’ve found evidence of a whole range of activities,” said Sonia Zarrillo of the University of Calgary in Canada, one of the authors of the study published in the journal Science.

While it is accepted that the Americas were first colonised by people crossing from Asia on a land bridge to Alaska, some scientists have suggested that there was a second colonisation from the east by Polynesians who had experience of sailing long distances in double-hulled canoes.

Two further studies, published in the journal Current Biology, lend support to this controversial hypothesis by discovering genetic links between the native inhabitants of Rapanui – Easter Island – and the indigenous people of South America. This suggests there was contact and interbreeding between the remote Pacific island and the American mainland long before the arrival of the first Europeans.

Anna-Sapfo Malaspinas of the Natural History Museum of Denmark, and colleagues, analysed the DNA of 27 native Rapanui people and found that their genomes on average were about 76 per cent Polynesian, eight per cent Native American and 16 per cent European.

However, further analysis showed that although the European lineage could be explained by contact with white Europeans after the island was “discovered” in 1722 by Dutch sailors, the South American component was much older, dating to between about 1280 and 1495, soon after the island was first colonised by Polynesians in around 1200.

This suggested that either South American Natives had sailed west to Rapanui or that the Rapanui had sailed east to South America, and then somehow made the return journey back to the island, nearly 2,500 miles away.

“All sailing voyages heading intentionally east from Rapa Nui would always reach the Americas with a trip lasting two weeks to approximately two months. The return trip appears more challenging,” the scientists said.

Although the latter journey was more difficult, it could also explain why the sweet potato – a native plant of South America – had been grown and eaten across Polynesia long before the first Europeans arrived, they said.

In yet another study, Professor Eske Willerslev of the Natural History Museum in Denmark found that the DNA of two ancient skulls found in Brazil – from the indigenous “Botocudos” natives – shows a distinct Polynesian ancestry, with no DNA sequences that can be described as unique to Native Americans.



The Koyal Group Info Mag: Halting the spread of Ebola


Nigeria a model for quick action, scientists find

Ebola. The word brings fear of an unseen and potentially lethal enemy. But there are ways to stop its spread, say infectious disease scientists.

Quick intervention is needed, according to the researchers, who recently published their findings in the journal Eurosurveillance.

Analyzing Ebola cases in Nigeria, a country with success in containing the disease, the scientists estimated the fatality rate, the progression of transmission, the proportion of health care workers infected, and the effect of control interventions on the size of the epidemic.

Rapid response needed

"Rapid control is necessary, as is demonstrated by the Nigerian success story," says Arizona State University (ASU) scientist Gerardo Chowell, senior author of the paper.

"This is critically important for countries in the West Africa region that are not yet affected by the Ebola epidemic, as well as for countries in other regions of the world that risk importation of the disease."

The research is funded by the U.S. National Science Foundation (NSF)-National Institutes of Health (NIH)-Department of Agriculture (USDA) Ecology and Evolution of Infectious Diseases (EEID) Program.

"Controlling a deadly disease like Ebola requires understanding how it's likely to spread, and knowing the ways of managing that spread that are most likely to be effective," says Sam Scheiner, NSF EEID program director.

"Being able to respond quickly needs a foundation of knowledge acquired over many years. The work of these scientists is testimony to long-term funding by the EEID program."

Control measures in Nigeria

The largest Ebola outbreak to date is ongoing in West Africa, with more than 8,000 reported cases and 4,000 deaths. However, just 20 Ebola cases have been reported in Nigeria, with no new cases since early September.

All the cases in Nigeria stem from a single traveler returning from Liberia in July.

The study used epidemic modeling and computer simulations to project the size of the outbreak in Nigeria if control interventions had been implemented during various time periods after the initial case, and estimated how many cases had been prevented by the actual early interventions.
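
The study's actual model isn't reproduced in this article, so the sketch below only illustrates the general kind of branching-process simulation used for such projections: each case infects a Poisson-distributed number of new cases one serial interval later, and the mean reproduction number drops once interventions begin. The parameter values are placeholders, not the published Nigerian estimates.

```python
import math
import random

def poisson(rng, lam):
    """Draw a Poisson-distributed integer (Knuth's method; fine for small lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_outbreak(r_before=2.0, r_after=0.4, intervention_day=10,
                      serial_interval=7, max_days=150, seed=1):
    """Toy branching-process projection of cumulative outbreak size.

    Each day's new cases generate secondary cases one serial interval later;
    the mean number of secondary cases per case drops from r_before to
    r_after once control measures start on intervention_day.
    """
    rng = random.Random(seed)
    new_cases = [0] * (max_days + serial_interval + 1)
    new_cases[0] = 1                      # the single imported index case
    for day in range(max_days + 1):
        r = r_after if day >= intervention_day else r_before
        secondary = sum(poisson(rng, r) for _ in range(new_cases[day]))
        new_cases[day + serial_interval] += secondary
    return sum(new_cases)

# Earlier intervention generally means a smaller projected outbreak (illustrative only)
for start_day in (3, 10, 20, 30):
    print(start_day, simulate_outbreak(intervention_day=start_day))
```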

"This timely work demonstrates how computational simulations, informed by data from health care officials and the complex social web of contacts and activities, can be used to develop both preparedness plans and response scenarios," says Sylvia Spengler, program director in NSF's Directorate for Computer and Information Science and Engineering, which also supported the research.

Control measures implemented in Nigeria included holding all people showing Ebola symptoms in an isolation ward if they had had contact with the initial case. If Ebola was confirmed through testing, people diagnosed with the disease were moved to a treatment center.

Asymptomatic individuals were separated from those showing symptoms; those who tested negative without symptoms were discharged.

Those who tested negative but showed symptoms--fever, vomiting, sore throat and diarrhea--were observed and discharged after 21 days if they were then free of symptoms, while being kept apart from people who had tested positive.

Brief window of opportunity

Ebola transmission is dramatically influenced by how rapidly control measures are put into place.

"Actions taken by health authorities to contain the spread of disease sometimes can, perversely, spread it," says NSF-funded scientist Charles Perrings, also of ASU.

"In the Nigeria case, people who tested negative but had some of the symptoms were not put alongside others who tested positive," says Perrings. "So they had no incentive to flee, and their isolation did nothing to increase infection rates. Elsewhere in the region isolation policies have had a different effect."

The researchers found that the projected size of the outbreak in Nigeria ranged from 15-106 cases when interventions were put in place on day 3; 20-178 cases when implemented on day 10; 23-282 cases on day 20; 60-666 cases on day 30; 39-1,599 cases on day 40; and 93-2,771 cases on day 50.
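Those figures come from the authors' own epidemic model. Purely to illustrate how strongly intervention timing drives outbreak size, here is a minimal stochastic branching-process sketch in Python; the function project_outbreak and the parameter values (r0, r_control, serial_interval, horizon, trials) are our assumptions and will not reproduce the published numbers.

    import numpy as np

    def project_outbreak(intervention_day, r0=2.0, r_control=0.4,
                         serial_interval=15, horizon=150, trials=2000, seed=0):
        """Crude branching-process projection of total outbreak size.

        Illustrative assumptions, not the published model: each case infects
        a Poisson-distributed number of others whose mean drops from r0 to
        r_control once interventions begin, with generations spaced one
        serial interval apart.
        """
        rng = np.random.default_rng(seed)
        sizes = []
        for _ in range(trials):
            day, active, total = 0, 1, 1
            while active > 0 and day < horizon:
                mean = r_control if day >= intervention_day else r0
                new_cases = int(rng.poisson(mean, size=active).sum())
                total += new_cases
                active = new_cases
                day += serial_interval
            sizes.append(total)
        return np.percentile(sizes, [2.5, 97.5])

    for d in (3, 10, 20, 30, 40, 50):
        low, high = project_outbreak(intervention_day=d)
        print(f"intervention on day {d}: roughly {low:.0f}-{high:.0f} cases")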

The initially infected person generated 12 secondary cases in the first generation of the disease; those 12 generated five secondary cases in the second generation; and those five generated two in the third.

That yields a rough, generation-by-generation estimate of the reproduction number, declining from 12 in the first generation to approximately 0.4 in the second and third generations.

A reproduction number above 1.0 indicates that the disease has the potential to keep spreading.
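As a quick back-of-the-envelope check, the per-generation reproduction numbers can be recovered directly from the case counts quoted above (the list name cases_by_generation is ours):

    # Cases in each generation of the Nigerian chain: the index case,
    # then 12, 5 and 2 secondary cases (figures quoted above).
    cases_by_generation = [1, 12, 5, 2]

    # Per-generation reproduction number: new cases divided by the cases
    # that produced them; values above 1.0 mean the outbreak is growing.
    for g in range(1, len(cases_by_generation)):
        r = cases_by_generation[g] / cases_by_generation[g - 1]
        print(f"generation {g}: R ≈ {r:.1f}")
    # generation 1: R ≈ 12.0
    # generation 2: R ≈ 0.4
    # generation 3: R ≈ 0.4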

Recent estimates of the reproduction number for the ongoing Ebola epidemic in Sierra Leone and Liberia range between 1.5 and 2 (two new cases for each single case), indicating that the outbreak has yet to be brought under control.

The effectiveness of the Nigerian response, scientists say, is illustrated by a dramatic decrease in the number of secondary cases over time.

The success story for Nigeria, they maintain, sets a hopeful example for other countries, including the United States.

Wednesday, October 29, 2014

The Koyal Group Info Mag - Prototype Paper Test Can Detect Ebola Strains

DNA-programmed blotting paper could soon give doctors a simple disease test that reveals an infection in 30 minutes for just a few pence.

Researchers have proved the technique works by developing a prototype Ebola test in just 12 hours, using only $20 of materials.

The smart diagnostics use a soup of biological ingredients including the genetic material RNA.

The researchers say this can be freeze-dried and preserved on ordinary paper.

Team leader Jim Collins, who has joint appointments at Boston and Harvard Universities, says the biological powder can be reactivated by simply adding water, like living powdered soup.

"We were surprised at how well these materials worked after being freeze dried," he told the BBC.

"Once they're rehydrated, these biological circuits function in these small paper disks as if they were inside a living cell."

Genetic hacking
Jim Collins is a leading pioneer in the field of synthetic biology; his 2000 paper, which showed that genetic circuits could be built and programmed in much the same way as electronic circuits, helped launch the discipline.

Since then, synthetic biology has become a powerful tool in fundamental biology, with researchers hacking the genetic programmes of microbes to study their life processes, or giving them the power to compute using logic like a digital processor.

Collins' group has previously reprogrammed bacteria to become cellular spies, recording events as they pass through an animal's bowels.

But the discipline has required specialist skills, so few laboratories can take advantage of the techniques. The researchers' avowed intention in the new work, described in the journal Cell, is to make synthetic biology widely available.

They've definitely succeeded, says Professor Lingchong You, an expert in cellular reprogramming at Duke University.

"This paper-based approach is incredibly attractive. It feels like you could use it in your garage! It'll give scientists a synthetic-biology playground for a very low cost."

'Biochemical soup'
The materials in the powdered biochemical soup include simple enzymes that bacteria need, molecules to power the chemical reactions, amino acids -- the building bricks of cell biology -- and, importantly, ribosomes: giant molecular machines that read genetic material and use it to assemble those bricks into functioning proteins.

In liquid form, these cell extracts are routinely used in biology labs. Lingchong You gives credit to Collins for having the imagination to freeze-dry them with synthetic genes.

"With hindsight, it's obvious it should work. But most of us don't think in this direction - there was a real leap of faith. But the fact you can leave these freeze-dried systems for a year, and they'll still work - that's quite remarkable."

Alongside the paper-based biochemistry, Jim Collins' team - in collaboration with Peng Yin, also at Harvard University's Wyss Institute - has introduced a new way of programming RNA, the molecular cousin of DNA that ribosomes read. Their method makes the gene circuits far more flexible than previous approaches.

The new type of RNA can be programmed to react and respond to any particular biochemical input, and then switch on the rest of the genetic machinery.

"This gives us a programmable sensor that can be readily and rapidly designed," Collins explains.

The Ebola test they experimented with is a proof of principle showing how flexible the programming step is.

"In a period of just 12 hours, two of my team managed to develop 24 sensors that would detect different regions of the Ebola genome, and discriminate between the Sudan and the Zaire strains."

In contrast, conventional antibody tests take months and cost thousands of pounds to devise, the researchers argue.

Quick response

The genetic test kit gives a simple colour output, turning the paper from yellow to purple, with the change visible within half an hour. By changing the input trigger, variants of the test could be used to reveal antibiotic resistance genes in bacterial infections or biomarkers of other disease conditions.

Their Ebola test is not suitable for use in the epidemic areas at the moment, Collins emphasises, but it would be simple to devise one that is.

The arrays of programmed paper dots would be easy to mass produce. Lingchong You envisions an "entire fabrication process carried out by computer-aided circuit design, robotics-mediated assembly of circuits, and printing onto paper."

And price is not the only consideration. Collins points out the freeze-dried circuits are stable at room temperature. In large parts of the world where electricity is unreliable, or there are no refrigerators, this would be a particular advantage.


"We are very excited about this," he added. "In terms of significance, I rank this alongside all the other breakthroughs I've been involved in."

Tuesday, October 28, 2014

The Koyal Group Info Mag: Professor with Irish roots wins Nobel for medicine

Professor John O'Keefe speaks at a news conference in London. Reuters


An Irish-American scientist and a Norwegian husband-and-wife research team yesterday won the Nobel Prize in medicine for discovering the brain's navigation system - the inner GPS that helps us find our way in the world - a revelation that one day could help those with Alzheimer's.

The research by John O'Keefe, May-Britt Moser and Edvard Moser represents a "paradigm shift" in neuroscience that could help researchers understand the sometimes severe spatial memory loss associated with Alzheimer's disease, the Nobel Assembly said.

"This year's Nobel Laureates have discovered a positioning system, an 'inner GPS' in the brain, that makes it possible to orient ourselves in space," the assembly said.

O'Keefe (75), a professor at University College London, discovered the first component of this system in 1971 when he found that a certain type of nerve cell was always activated when a rat was at a certain place in a room. He demonstrated that these "place cells" were building up a map of the environment, not just registering visual input.

Thirty-four years later, in 2005, May-Britt Moser and Edvard Moser, married neuroscientists at the Norwegian University of Science and Technology in Trondheim, identified another type of nerve cell - the "grid cell" - that generates a coordinate system for precise positioning and path-finding, the assembly said.

It was the fourth time that a married couple has shared a Nobel Prize and the second time in the medicine category.

Meanwhile, it was also announced yesterday that Professor O'Keefe is to receive an honorary doctorate from UCC. Professor O'Keefe, whose father hailed from Newmarket, Co Cork, and who still has family in the area, will also be guest speaker at a major symposium organised by UCC's Department of Anatomy and Neuroscience in advance of the ceremony.