We host news of discoveries in various fields of science with the focus on space, medical treatments, fringe science, microbiology, chemistry and physics, while providing commercial and cultural contexts and deeper insight. http://koyalgroupinfomag.com/blog/

Monday, November 17, 2014

Koyal Info Mag: History's first -- Space probe lands on comet


Twenty years ago, a space mission to land a probe on a comet was envisioned. Now, ten years and some 6 billion kilometres after launch, the Rosetta mission has seen success.

It's undoubtedly a resounding accomplishment for the European Space Agency (ESA), which launched the Rosetta mission in March 2004. After blasting off from the Kourou spaceport in French Guiana, Rosetta and Philae logged at least 6 billion kilometres just to reach the comet.
For almost three years, the spacecraft was in standby mode: it had travelled some 500 million miles from the sun, so far that its solar panels could not absorb enough energy to keep its systems running. Fortunately, Rosetta came out of hibernation this January and approached its target: comet 67P/Churyumov-Gerasimenko.

The comet, discovered in 1969, orbits the sun at speeds of up to 135,000 km/h. Measuring about 4 km across, it is shaped rather like a rubber duck, which initially left people doubting whether a landing would be possible at all: if the Philae lander touched down on an uneven surface, it could tip onto the wrong side -- and it has no way to right itself.

After separating from its carrier, Philae began its precarious seven-hour descent to the comet.

For the control team, those were seven hours of nerve-wracking anticipation, especially as a problem arose at a critical moment: an issue with the lander's thruster that could result in a rough landing at best. Unable to fix the fault, the team nearly cancelled the landing, but eventually decided to proceed in spite of it.

Inside ESA's control center in Darmstadt, Germany, tensions were high.

Finally, around 4pm (GMT), Philae's communications reached Earth: touchdown.

“We are there. We are sitting on the surface. Philae is talking to us. We are on the comet,” said Stephan Ulamec, Philae's lander manager at the control room.

The mission has already registered a number of firsts, including the first spacecraft to enter close orbit around a comet and, of course, the first to land a probe on one. If all goes well, it could also be the first spacecraft to accompany a comet as it circles the sun.

ESA's director general, Jean-Jacques Dordain, announced, "We are the first to do this, and that will stay forever."

During another interview with Koyal Info Mag, Dordain noted, "This is a big step for human civilization."

After delivering the Philae lander to the surface, the Rosetta spacecraft is expected to keep orbiting the comet, taking more images and collecting data as it travels toward the sun.

The Rosetta mission will be completed in December 2015, though if the spacecraft has enough fuel left it might be given a six-month extension for even more daring work. Philae itself has enough juice to keep working until March, before its electronics get fried by the sun's heat. Even so, it may continue to cling to the comet for around six years before losing its grip.

This daring space mission aims to further study the molecular and physical composition of a comet, believed to be made of material that predates the formation of the solar system. Koyal Info Mag hopes it will reveal more about how the solar system formed and how comets contribute to the life-sustaining qualities of a planet like Earth.

Sunday, November 16, 2014

The Koyal Group Info Mag: How turning science into a game rouses more public interest


Chris Lintott first met Kevin Schawinski in the summer of 2007 at the astrophysics department of the University of Oxford. Lintott had just finished a PhD at University College London on star formation in galaxies. He was also something of a minor celebrity in the astronomy community: he was one of the presenters of the BBC's astronomy programme The Sky at Night alongside Sir Patrick Moore, and had written a popular science book called Bang!: The Complete History of the Universe with Moore and Brian May, the Queen guitarist and astrophysicist. "I went to give a seminar talk as part of a job interview," Lintott recalls. "And this guy in a suit jumped up and started having a go at me because I hadn't checked my galaxy data properly. I thought it was some lecturer who I'd pissed off, but it turned out to be Kevin [Schawinski], who was a student at the time."

Most galaxies come in two shapes: elliptical or spiral. Elliptical galaxies range from perfectly spherical to a flattened rugby-ball shape. Spirals, like the Milky Way, have a central bulge of stars surrounded by a thin disk of stars arranged in a spiral pattern known as "arms". The shape of a galaxy is an imprint of its history and of how it has interacted with other galaxies over billions of years of evolution. It is a mystery to astronomers why galaxies have these shapes and how the two geometries relate to one another. For a long time, astronomers assumed that spirals were young galaxies, with an abundance of stellar nurseries where new stars are being formed. These regions typically emit hot, blue radiation. Elliptical galaxies, on the other hand, were thought to be predominantly old and full of dying stars, which are colder and therefore have a red colour. Schawinski was working on a theory that contradicted this paradigm. To prove it, he needed to find elliptical galaxies with blue regions, where star formation was taking place.

At the time, astronomers relied on computer algorithms to filter datasets of galaxy images. The biggest bank of such images came from the Sloan Digital Sky Survey, which contained more than two million astronomical objects, nearly a million of them galaxies, imaged by an automated robotic telescope in New Mexico with a 2.5-metre mirror. The problem was that, while computers can easily filter galaxies by colour, no algorithm at the time could reliably sort galaxies by shape. "It's really hard to teach a computer a pattern-recognition task like this," says Schawinski, now a professor of astronomy at the Swiss Federal Institute of Technology in Zurich. "It took computer scientists a decade to [teach a computer] to tell human faces apart, something every child can do the moment they open their eyes." The only way to prove his theory, Schawinski decided, was to look at each galaxy image, one by one.
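To see why colour was the easy half of the problem, consider how little code a colour filter actually takes. The sketch below is purely illustrative: the u and r magnitudes and the u-r = 2.22 red/blue dividing line are assumptions borrowed from common Sloan survey practice, not details given in this article, and shape classification admits no comparably simple rule.

```python
# Illustrative only: a colour cut is essentially a one-line filter,
# whereas galaxy shape has no comparably simple numerical rule.

def split_by_colour(u_mags, r_mags, cut=2.22):
    """Split galaxies into 'red' and 'blue' by their u-r colour.

    The 2.22 dividing line is a commonly used Sloan-survey value
    and is an assumption here, not a figure from the article.
    """
    red, blue = [], []
    for i, (u, r) in enumerate(zip(u_mags, r_mags)):
        (red if u - r > cut else blue).append(i)
    return red, blue

# Example: three galaxies, two red and one blue.
red_ids, blue_ids = split_by_colour([19.1, 18.4, 20.0], [16.5, 16.9, 17.2])
print(red_ids, blue_ids)  # -> [0, 2] [1]
```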

Schawinski did it for a week, working 12 hours every day. He would go to his office in the morning, click through images of galaxies while listening to music, break for lunch, and continue until late in the evening. "When I attended Chris's seminar, I had just spent a week looking through fifty thousand galaxies," says Schawinski.

When Lintott moved to Oxford, he and Schawinski started debating the problem of how to classify datasets with millions of images. They weren't the only ones. "Kate Land, one of my colleagues, was intrigued by a recent paper which claimed most galaxies were rotating around a common axis," Lintott says. "Which is indeed puzzling, because the expectation was that these axes would be totally random." Land needed more data, which required looking at the rotation of tens of thousands of galaxies. "Out of the blue she asked me whether, if we put a laptop with galaxy images in the middle of a pub, people would classify them," Lintott recalls.

At the time, Nasa had launched a project called Stardust@home, which had recruited about 20,000 online volunteers to identify tracks made by interstellar dust in samples from a comet. "We thought that if people are going to look at dust tracks, then surely they'll look at galaxies," says Lintott. Once it was decided they would go ahead with the project, they built a website within days. The homepage displayed the image of a galaxy from the dataset. For each image, the volunteers were asked if the galaxy was a spiral or elliptical. If a spiral, they were asked if they could discern the direction of its arms and the direction of its rotation. There were also options for stars, unknown objects and overlapping galaxies.

The site, called Galaxy Zoo, launched on July 11, 2007. "We thought we would get at least some amateur astronomers," Lintott says. "I was planning to go to the British Astronomical Society, give a talk and get at least 50 of their members to classify some galaxies for us." Within 24 hours of its launch, Galaxy Zoo was receiving 60,000 classifications per hour. "The cable we were using melted and we were offline for a while," Schawinski says. "The project nearly died there." After ten days, users from all over the world had submitted eight million classifications. By November, every galaxy had been seen by an average of 40 people. Galaxy Zoo users weren't just classifying galactic shapes, they were making unexpected discoveries. Barely a month after launch, Dutch schoolteacher Hanny van Arkel discovered a strange green cluster that turned out to be a never-before-seen astronomical object. Christened Hanny's Voorwerp ("voorwerp" means "object" in Dutch), it remains the subject of intense scientific scrutiny. Later that year, a team of volunteers compiled evidence for a new type of galaxy -- blue and compact -- which they named Pea galaxies.

"When we did a survey of our volunteers we found out they weren't astronomers," Lintott says. "They weren't even huge science fans and weren't that interested in making new discoveries. The majority said they just wanted to make a contribution." With Galaxy Zoo, Schawinski and Lintott developed a powerful pattern-recognition machine, composed entirely of people who could not only process data incredibly quickly and accurately -- aggregating the results via a democratic statistical process -- but also enable individual serendipitous discoveries, a fundamental component of scientific enquiry. With robotic telescopes spewing terabytes of images every year, they found an answer to big data in a big crowd of volunteers. Since Galaxy Zoo's first discoveries, this pioneering approach of crowdsourcing science has gained a strong following not only with the general public but also within the scientific community. Today, there are hundreds of crowdsourcing projects involving a variety of scientific goals, from identifying cancer cells in biological tissues to building nanoscale machines using DNA. These endeavours have resulted in breakthroughs, such as Schawinski and Lintott's discoveries on the subject of star formation, that have merited publication in the most reputed scientific journals. The biggest breakthrough, however, is not the scientific discoveries per se, but the method itself. Crowdsourcing science is a reinvention of the scientific method, a powerful new way of making discoveries and solving problems that could have otherwise remain undiscovered and unsolved.

At around the time Lintott and his team were developing Galaxy Zoo, two computer scientists at the University of Washington in Seattle, Seth Cooper and Adrien Treuille, were trying to use online crowds to solve a problem in biochemistry called protein folding.

A protein is a chain of smaller molecules called amino acids. Its three-dimensional shape determines how it interacts with other proteins and, consequently, its function in the cell. Each protein has only one native structure, and finding that structure is a notoriously difficult problem: for a given chain of amino acids, there are millions of ways in which it could fold into a three-dimensional shape. Biochemists know thousands of amino-acid sequences, but struggle to work out how they fold into the three-dimensional structures found in nature.
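The scale of that search space is easy to make concrete with a back-of-the-envelope calculation in the spirit of Levinthal's paradox. The three-states-per-residue figure below is an illustrative assumption, not a number from the article:

```python
# Back-of-the-envelope version of Levinthal's paradox: even with a
# crude 3-states-per-residue model, exhaustive search is hopeless.
states_per_residue = 3      # assumed, for illustration
residues = 100              # a modest-sized protein

conformations = states_per_residue ** residues
print(f"{conformations:.2e}")  # ~5.15e+47 possible shapes

# Sampling a trillion conformations per second would still take
# on the order of 1e28 years -- far longer than the universe's age.
seconds = conformations / 1e12
print(f"{seconds / 3.15e7:.1e} years")
```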

Cooper and Treuille's lab had previously developed an algorithm which attempted to predict these structures. The algorithm, named Rosetta, required a lot of computer power, so it was adapted to run as a screensaver that online volunteers could install. The screensaver, called Rosetta@home, required no input from volunteers, so Cooper and Treuille had been brought in to turn it into a game. "With the screensaver, users could see the protein and how the computer was trying to fold it, but they couldn't interact with it," Cooper says. "We wanted to combine that computer power with human problem-solving."

Cooper and Treuille were the only computer scientists in their lab. They also had no idea about protein folding. "In some sense, we were forced to look at this very esoteric and abstract problem through the eyes of a child," Cooper says. "Biochemists often tell you that a protein looks right or wrong. It seemed that with enough training you can gain an intuition about how a protein folds. There are certain configurations that a computer never samples, but a person can just look at it and say, 'that's it'. That was the seed of the idea."

The game, called Foldit, was released in May 2008. Players start with a partially folded protein structure, arrived at by the Rosetta algorithm, and manipulate it by clicking, pulling and dragging amino acids until they reach its most stable shape. The algorithm calculates how stable the structure is: the more stable, the higher the score.
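In outline, the scoring simply inverts an energy calculation: the lower the computed energy of the player's fold, the higher the score. Below is a minimal sketch of that idea; the toy pairwise clash/contact energy is a hypothetical stand-in, not the actual Rosetta energy function.

```python
import math

def toy_energy(coords):
    """A hypothetical stand-in for an energy function (not Rosetta's):
    pairs of amino acids are penalised for clashing (too close) and
    mildly rewarded for sitting near a comfortable contact distance."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            if d < 1.0:        # steric clash: heavy penalty
                e += 100.0 * (1.0 - d)
            elif d < 2.0:      # favourable contact: small reward
                e -= 1.0
    return e

def score(coords):
    """Higher score for lower energy, echoing Foldit's design."""
    return -toy_energy(coords)

# A clashed arrangement scores worse than a relaxed one.
clashed = [(0, 0, 0), (0.5, 0, 0), (3, 0, 0)]
relaxed = [(0, 0, 0), (1.5, 0, 0), (3, 0, 0)]
print(score(clashed), score(relaxed))  # -> -50.0 2.0
```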

"When we first trialled the game with the biochemists, they weren't particularly excited," Cooper says. "But then we added a leaderboard, where you could see each other's names and respective scores. After that, we had to shut down the game for a while because it was bringing all science to a halt."


Foldit turned the goal of solving one of biochemistry's hardest problems into a game that can be won by scoring points. Over the past five years, more than 350,000 people have played Foldit, and these players have consistently folded proteins better than the best algorithms. "Most of these players didn't have a background in biochemistry, and they were beating some of the biochemists who were playing the game," Cooper says. "They also discovered an algorithm similar to one that the scientists had been developing. It was more efficient than any previously published algorithm."

Tuesday, November 11, 2014

The Koyal Group Info Mag: How A Failed Experiment On Rats Sparked A Billion-Dollar Infant-Care Breakthrough

Researchers studying massages on rat pups helped advance the science on neo-natal care for premature babies, and they will be awarded on Thursday for their breakthrough. | Les Stocker via Getty Images


WASHINGTON -- At a research lab in Duke University's Department of Pharmacology in 1979, a group of scientists sparked a major breakthrough in infant care from a failed experiment on rats.
At the time, Dr. Saul Schanberg, a neuroscientist and physician, was running tests on newborn rats to measure growth-related markers (enzymes and hormones) in their brains. Together with Dr. Cynthia Kuhn and lab technician Gary Evoniuk, he kept getting strange results: with the rat pups separated from their mothers in order to run the experiments, their growth markers kept registering at low levels.

The team varied the trials. They used an anesthetized mother rat to feed the pups during and after the experimentation, and tried keeping the pups and mother in the same cage but with a divider to see if a lack of pheromones was the problem.

“The experiment failed,” Kuhn recalled.

So the team approached it from another angle. Instead of stabilizing the rat pups so they could run tests, they tried to figure out what was wrong with the pups in the first place. From a friend, Kuhn had heard theories that massaging the pups could produce positive results. Evoniuk, meanwhile, had watched mother rats groom their pups by vigorously licking them. He proposed doing essentially the same thing, minus the tongue.
The team began using a wet brush to rub the rat pups at different pressure levels. Eventually, they found the right one, and on cue, the deprivation effect was reversed.

"I said, 'Let’s give it a shot,' and it worked the first time and the second time," recalled Evoniuk. "It was just the touch.”

Though they had no way of knowing it, Schanberg's team had taken the first step in a process that would upend the conventional wisdom on post-natal care. Three and a half decades later, the theories the team stumbled upon through failure would save billions of dollars in medical costs and affect countless young parents' lives.

On Thursday night, the team will be rewarded for its work. A coalition of business, university and scientific organizations will present the Golden Goose Award to them and to other researchers with similarly successful projects. The prize is meant to shine a light on how research with odd-sounding origins (really, massaging rat pups?) can produce groundbreaking results. More broadly, it showcases the importance of federally funded scientific research.

The work done by Schanberg’s team is inextricably tied to the support of taxpayers -- not just because the group operated from a grant of approximately $273,000 from the National Institutes of Health. As Kuhn and Evoniuk both argued, the breakthrough they were able to produce never could have happened with a private funding source. The demand for an immediate result or for profit wouldn’t have allowed them to pivot off the initial failure.

“It is not a straight path from point A to point B,” said Evoniuk. “There are all kinds of weird little detours. We were really following a detour from where this work started. The federal funding gave people like Saul the ability to follow their scientific instincts and try to find the answers to interesting questions that popped up.”

As Congress members head back to their districts before the midterm elections, fights over science funding appear to be low on the list of priorities. The two parties are in the midst of an informal truce, having put in place budget caps this past winter. And no one seems particularly eager to disrupt that truce, even if science advocates warn it needs upending.

While NIH's funding increased this year from last year, when sequestration forced an estimated $1.55 billion reduction, it still fell $714 million short of pre-sequestration levels. Adjusted for inflation, it was lower than every year but President George W. Bush's first year in office.

Surveying the climate, the American Academy of Arts & Sciences released a report this week showing that the United States "has slipped to tenth place" among economically advanced nations in overall research and development investment as a percentage of GDP. For science advocates, it was another sobering cause for alarm. Young researchers, they argue, are leaving the field or the country. Projects that could yield tremendous biomedical breakthroughs aren't getting off the ground.

Looming over the Golden Goose awards ceremony is this reality: Would an experiment testing rat-pup massages ever survive this political climate? Would it be admonished as waste by deficit hawks in Congress?

“Researchers massaging rats sounds strange, but oddball science saves lives,” said Rep. Jim Cooper (D-Tenn.), who is participating in the awards ceremony. “In this instance, premature babies got a healthier start. If Congress abandons research funding, we could miss the next unexpected breakthrough.”
NIH funding was certainly critical to the successful research behind rat-pup massages. "Without the NIH none of this would have happened, zero," said Kuhn.

But serendipity also played a role. Not long after he made his discovery, Schanberg was at an NIH study section with Tiffany Field, a psychologist at the University of Miami School of Medicine. Field had also been doing research -- also funded by the NIH -- on massage therapies for prematurely born babies. But she was getting poor results.

"We were just sharing our data, basically," Field recalled of that conversation. "I was telling him we were having trouble getting any positive effects with the preemies. … He talked about how his lab technician had an eureka experiment when he saw his mother's tongue licking the babies."

The conclusion reached was that Field probably wasn't massaging the premature babies hard enough. Instead of applying "moderate pressure" (as Schanberg had been doing) she was applying more of a "soft stroking."

A study done on rats became a study on humans. Field changed up her experiment and began to see results right away. Instead of the discomfort caused by a tickle-like sensation, the moderate pressure had a tonic effect, stimulating receptors. The babies' heart rates slowed; the preemies seemed more relaxed; they were able to absorb food and gain weight; and they showed more evidence of growth hormone, increased insulin, greater bone density and greater movement of the GI tract. The magnitude of the finding was enormous.

"We published the data and we actually did a cost-benefit analysis at that point and determined we could save $4.8 billion per year by massaging all the preemies, because of all the significant cost savings for the hospital," Field recalled.

Her conclusion challenged the prevailing sentiment of the time that prematurely born babies should be left in incubators, fed intravenously, and not touched immediately after birth lest they become agitated and potentially harmed. But few people listened.

"The only person who paid attention to it was Hillary Clinton," she recalled, noting that Clinton, who was working on a health care reform initiative as First Lady, expressed interest in the research.

Since then, however, conceptions of post-natal care have changed. Subsequent studies have confirmed Field's findings, though others have questioned whether there is enough research, or the proper methodology, to draw sweeping conclusions. Nevertheless, whereas few people used massage therapies in the '80s and '90s, as of eight years ago 38 percent of natal care units were using them, said Field. The method is estimated to save $10,000 per infant -- roughly $4.7 billion a year.
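Those two figures are consistent with each other, as a quick check shows; the implied count of treated infants below is derived from the article's numbers, not stated in it.

```python
# Quick consistency check on the article's two savings figures.
savings_per_infant = 10_000   # dollars per infant, as stated
total_savings = 4.7e9         # dollars per year, as stated

implied_infants = total_savings / savings_per_infant
print(f"{implied_infants:,.0f} preemies per year")  # -> 470,000
```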

Those involved in the research still marvel that the chain of events started with a failed experiment on rats and turned on a fortuitous meeting between two scientists.

"We didn’t set out to figure out how to improve nursing care," said Kuhn. "But we wound up saving a lot of money and helped babies grow better, their cognitive outcome was better, they got out of the [intensive care units] sooner. … There was no downside."

"One thing led to another," said Evoniuk. "We were just kind of following an interesting question not thinking we were going to change medical practice."

Schanberg won't be around to receive his Golden Goose award Thursday night. He died in 2009, and his granddaughter will accept on his behalf. But those who worked with him say that his research remains a testament to the good results that an inquisitive mind and a respectable funding stream can produce. It's a story that scientists may find uplifting.

But it doesn't necessarily have a happy ending.

In the aftermath of her work with Schanberg, Field continued studying natal care, starting the Touch Research Institute at the University of Miami in 1992 with help from the NIH and Johnson & Johnson. Her work has been widely cited in medical journals and newspaper articles. But the funding streams have run dry, and now she's faced with the prospect of dramatically narrowing the scope of her lifelong work.

"We are faced with having to close the institute because we don’t have any NIH grants," she said. "It used to be a third of us would get the grants. Now they are funding at something like the seventh percentile."

Friday, November 7, 2014

The Koyal Group Info Mag: A glimpse into the inner workings of the 3D brain


- Scientists at the Mercator Research Group created the new models
- They let experts build artificial networks of nerve cells in the hippocampus on a computer screen, so they can explore how memories form
- The hippocampus is thought to be one of the oldest regions of the brain
- Scientists are using the new tool to monitor how neural signals spread through the network over time
- In the future they hope to show how animals memorise food sources and dangers

The way neurons are interconnected in the brain is complicated.

But now scientists have created a new type of computer model to make artificial networks of nerve cells found in the hippocampus region of the brain.

The hippocampus helps us form personal memories, and it is hoped the tool will shed more light on how these memories develop as they move through the region's different structures.


Scientists have created a new type of computer model to make artificial networks of nerve cells in the hippocampus part of the brain. A model of a rat's hippocampus is pictured, with different colours denoting different regions. It is hoped the tool will shed more light on how the hippocampus forms memories

Using the models, scientists will also explore how the structure connects to the rest of the brain, and which information arrives where and when.

The model has been created by Dr Martin Pyka and his colleagues from the Mercator Research Group in Germany.

Dr Pyka developed a method that allows the brain's anatomical data and neurons to be reconstructed as a 3D model.

Once built, this 3D model can be manipulated on a computer.


The hippocampus enables humans to navigate space securely and to form personal memories. The region is seahorse shaped and is shaded in red in this illustration


Researchers from the Mercator Research Group in Germany, developed the method that means the brain can be constructed as a 3D model, and can be manipulated on a computer (pictured). Structures that form a rat's hippocampus, including CA1, CA3, subiculum and entorhinal cortex are pictured in blue, red, yellow and green

They claim that their approach is unique because it enables the automatic calculation of neural interconnections based on the neurons' positions in space.

This means scientists can generate feasible network structures more easily than with other tools.
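A minimal sketch shows what position-based wiring can look like: connect each pair of neurons with a probability that falls off with the 3D distance between them. The Gaussian fall-off and its parameters below are assumptions for illustration; the published model's actual connection rules may differ.

```python
import math
import random

def connect_by_position(positions, p_max=0.5, sigma=100.0, seed=0):
    """Wire up neurons from 3D coordinates alone: the probability of
    a connection decays with distance as a Gaussian.

    p_max and sigma (here in arbitrary micrometre-like units) are
    illustrative assumptions, not parameters from the published model.
    """
    rng = random.Random(seed)
    edges = []
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i == j:
                continue
            d = math.dist(positions[i], positions[j])
            if rng.random() < p_max * math.exp(-(d / sigma) ** 2):
                edges.append((i, j))
    return edges

# Example: three neurons; nearby pairs are far more likely to connect.
neurons = [(0, 0, 0), (50, 0, 0), (400, 0, 0)]
print(connect_by_position(neurons))
```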

They are using the models to monitor how neural signals spread through the network over time, according to the study published in the journal Frontiers in Neuroanatomy.

Dr Pyka has, so far, found evidence that the hippocampus' form and size could explain why neurons in those networks fire at certain frequencies.

In future, this method may help us understand how animals combine various kinds of information to form memories within the hippocampus -- for example, to memorise food sources or dangers and to remember them in certain situations.

The researchers have so far shown off a model of a rat’s hippocampus including its different layers such as the CA1 and CA3 regions, the subiculum and entorhinal cortex.


Dr Pyka has so far found evidence that the hippocampus' form and size could explain why neurons in those networks fire in certain frequencies. Neurons in a mouse hippocampus are pictured



Wednesday, November 5, 2014

The Koyal Group Info Mag: Scientists Explain Why Antarctic Sea Ice Is at a Record High



The Antarctic has been brimming with sea ice in 2014, hovering around record-high extent for most of the year. On September 20, it peaked at 7.78 million square miles, breaking the previous Antarctic sea ice extent record, which was set in 2013. But while it might seem that this record throws into question the validity of global warming, scientists say this just isn’t the case.

Up north, the sea ice loss in the Arctic is exactly what you would expect in a warming world. 2012 was a record-low year for Arctic sea ice in the satellite record, and 2014 was the sixth lowest.

But in the Antarctic, sea ice has been growing year over year. This is in contrast to the ice on Antarctic land, which appears to be shrinking faster than previously estimated. The dichotomy suggests that something much more complicated than “warming equals melting” is occurring, and scientists say that more research is needed to understand the observations.

One possible explanation for the growing sea ice extent is that weather patterns around the Antarctic may be pushing more cold, continental air over the ocean, spurring the formation of more sea ice.

Another possibility actually involves the melting of the ice over Antarctica itself: as the land ice melts, it could be flushing large amounts of fresh water -- which freezes at a higher temperature than salt water -- into the ocean.

“There hasn’t been one explanation yet that I’d say has become a consensus, where people say, ‘We’ve nailed it, this is why it’s happening,’” said Claire Parkinson, a senior scientist at NASA’s Goddard Space Flight Center. “Our models are improving, but they’re far from perfect. One by one, scientists are figuring out that particular variables are more important than we thought years ago, and one by one those variables are getting incorporated into the models.”

When it comes to comparing the Arctic and the Antarctic, it’s probably best not to draw conclusions from their differences. “Some people have looked at the Antarctic increasing trend and use that to suggest that global warming isn’t happening, or that the increase in the Antarctic is offsetting the decrease in the Arctic and that’s simply not true,” says Walt Meier, a research scientist at the NASA Goddard Space Flight Center, in a video on the record extent. “If you look at the magnitudes of the changes we’re seeing in the wintertime, the Arctic is decreasing about twice as fast as the Antarctic is increasing.”

This year’s record extent will likely prompt new research on the difference between the North and South poles, and for good reason, according to Meier. “They’re more sensitive to climate change, and therefore they’re the regions that we see the biggest effects of climate change so far,” says Meier. “They’re kind of the canary in a coal mine of global warming.”


For more science news from The Koyal Group Info Mag, visit our Facebook page and follow us on Twitter @koyalgroup

Tuesday, November 4, 2014

The future is disappearing: How humanity is falling short of its grand technological promise by The Koyal Group Info Mag

What I find most interesting about typical visions of the future isn’t all the fanciful and borderline magical technology that hasn’t been invented yet, but rather how much of it actually already exists.

Consider something relatively straightforward, like a multi-touch interface on your closet door that allows you to easily browse and experiment with your wardrobe, offering suggestions based on prior behavior, your upcoming schedule and the weather in the locations where you are expected throughout the day. Or a car that, as it makes real-time navigational adjustments in order to compensate for traffic anomalies, also lets your co-workers know that you will be a few minutes late, and even takes the liberty of postponing the delivery of your regular triple-shot, lactose-free, synthetic vegan latte. There’s very little about these types of scenarios that isn’t entirely possible right now using technology that either already exists, or that could be developed relatively easily. So if the future is possible today, why is it still the future?

I believe there are two primary reasons. The first is a decidedly inconvenient fact that futurists, pundits and science fiction writers have a tendency to ignore: Technology isn’t so much about what’s possible as it is about what’s profitable. The primary reason we haven’t landed a human on Mars yet has less to do with the technical challenges of the undertaking, and far more to do with the costs associated with solving them. And the only reason the entire sum of human knowledge and scientific, artistic and cultural endeavor isn’t instantly available at every single person’s fingertips anywhere on the planet isn’t because we can’t figure out how to do it; it’s because we haven’t yet figured out the business models to support it. Technology and economics are so tightly intertwined, in fact, that it hardly even makes sense to consider them in isolation.

The second reason is the seemingly perpetual refusal of devices to play together nicely, or interoperate. Considering how much we still depend on sneakernets, cables and email attachments for something as simple as data dissemination, it will probably be a while before every single one of our devices is perpetually harmonized in a ceaseless chorus of digital kumbaya. Before our computers, phones, tablets, jewelry, accessories, appliances, cars, medical sensors, etc., can come together to form our own personal Voltrons, they all have to be able to detect each other’s presence, speak the same languages, and leverage the same services.

The two reasons I’ve just described as to why the future remains as such — profit motive and device isolation — are obviously not entirely unrelated. In fact, they could be considered two sides of the same Bitcoin. However, there’s still value in examining each individually before bringing them together into a unified theory of technological evolution.

Profitable, Not Possible

Even though manufacturing and distribution costs continue to come down, bringing a new and innovative product to market is still both expensive and surprisingly scary for publicly traded and historically risk-averse companies. Setting aside the occasional massively disruptive invention, the result is that the present continues to look suspiciously like a slightly enhanced or rehashed version of the past, rather than an entirely reimagined future.

This dynamic is something we have mostly come to accept as a tenet of our present technology, but conveniently disregard when contemplating the world of tomorrow. Inherent in our collective expectations of what lies ahead seems to be an emboldened corporate culture that has grown weary of conservative product iteration; R&D budgets unencumbered by intellectual property squabbles, investor demands, executive bonuses and golden parachutes; and massive investment in public infrastructure by municipalities that seem constantly on the verge of complete financial collapse – none of which, as we all know, are particularly reminiscent of the world we actually live in.

One of the staples of our collective vision of the future is various forms of implants: neurological enhancements to make us smarter, muscular augmentation to make us stronger, and subcutaneous sensors and transmitters to allow us to better integrate with and adapt to our environments. With every ocular implant that enables the blind to sense more light and higher resolution imagery; with every amputee who regains some independence through a fully articulated prosthetic; and with every rhesus monkey who learns to feed herself by controlling a robotic arm through a brain-computer interface, humanity seems to be nudging itself ever-closer to its cybernetic destiny.

There’s no doubt in my mind that it is possible to continue implanting electronics inside of humans, and organics inside of machines, until both parties eventually emerge as new and exponentially more capable species. However, what I’m not sure of yet is who will pay for all of it outside of research laboratories. Many medical procedures don’t seem to be enjoying the same trends toward availability and affordability as manufacturing processes, and as far as I can tell, insurance companies aren’t exactly becoming increasingly lavish or generous. As someone who is fortunate enough to have reasonably good benefits, but who still thinks long and hard about going to any kind of a doctor for any reason whatsoever due to perpetually increasing copays and deductibles (and perpetually decreasing quality of care), I can’t help regarding our future cybernetic selves with a touch of skepticism. The extent to which the common man will merge with machines in the foreseeable future will be influenced as much by economics and policy as by technological and medical breakthroughs. After all, almost a decade ago researchers had a vaccine that was 100 percent effective in preventing Ebola in monkeys, but until now, the profit motive wasn’t there to develop it further.

Let’s consider a more familiar and concrete data point: air travel. Growing up just a few miles from Dulles Airport outside of Washington, D.C., my friends and I frequently looked up to behold the sublime, delta-wing form of the Concorde as it passed overhead. I remember thinking that if one of the very first supersonic passenger jets entered service only three years after I was born, then surely by the time I grew up (and assuming the better part of the planet hadn't been destroyed by a nuclear holocaust unleashed by itchy trigger fingers in the United States or Soviet Union), all consumer air travel would be supersonic. Thirty-eight years after the Concorde was introduced -- and 11 years after the retirement of the entire fleet -- I think it's fair to say that air travel has not only failed to advance from the perspective of passengers, but unless you can afford a first- or business-class ticket, it has in fact gotten significantly worse.

It would be unfair of me not to acknowledge that many of us do enjoy in-flight access to dozens of cable channels through a primitive LCD touchscreen (which encourages passengers behind us to constantly poke at our seats, rudely dispelling any hope whatsoever of napping) as well as email-grade Wi-Fi (as opposed to a streaming-media-grade Internet connection), but somehow I’d hoped for a little more than the Food Network and the ability to send a tweet at 35,000 feet about how cool it is that I can send a tweet at 35,000 feet.

Novelty Is Not Progress

I’ve come to the conclusion over the last few years that it's far too easy to confuse novelty with technological and cultural progress, and nothing in my lifetime has made that more clear than smartphones. It used to be that computers and devices were platforms -- hardware and software stacks on top of which third-party solutions were meant to be built. Now, many devices and platforms are becoming much more like appliances, and applications feel more like marginally tolerated, value-add extensions. In some ways, this is a positive evolution, since appliances are generally things that all of us have, depend on, know how to use, and can reasonably afford. But let's consider a few other attributes of appliances: They typically only do what their manufacturer intends; they are the very paragons of planned obsolescence; and they generally operate either entirely in isolation, or are only compatible with hardware or services from the same manufacturer.

Admittedly, comparing a smartphone to a blender or a coffee maker isn't entirely fair, since our phones and tablets are obviously far more versatile. In fact, every time I adjust my Nest thermostat with whatever device happens to be in my pocket, or use Shazam to sample an ambient track in a coffee shop, or search for a restaurant in an unfamiliar city and have my phone (or my watch) take me directly to it, I'm reminded that several conveniences and miracles of the future have managed to thoroughly permeate the present. But one of the tricks I've learned for evaluating current technologies is to consider them in the broader context of what I want the future to be. And when I contemplate the kind of future I think most of us want -- one in which all our devices interoperate, and consumers have full control over the services those devices support and consume (but more on that in a moment) -- there's a lot about modern smartphones, tablets and the direction of computing in general to be very concerned about.

The reality is that novelty, and both technological and cultural progress, are only loosely related. Novelty is usually about interesting, creative or fun new products and services. It’s about iterative progress like eking out a few more minutes of battery life, or shaving off fractions of millimeters or grams, or introducing new colors or alternating between beveled and rounded edges. But true technological and cultural progress is about something much bigger and far more profound: the integration of disparate technologies and services into solutions that are far greater than the sum of their parts.

Progress is about increasing access to information and media as opposed to imposing artificial restrictions and draconian policies; it’s about empowering the world to do more than just shop more conveniently, or inadvertently disclose more highly targetable bits of personal information; it’s about trusting your customers to do the right thing, providing real and tangible value, and holding yourself accountable by giving all the stakeholders in your business the ability to walk away at any moment. And it’s about sometimes taking on a challenge not only for the promise of financial reward, but simply to see if it can be done, or because you happen to be in a unique position to do so, or because humanity will be the richer for it.

I know I’m probably coming across as a postmodern hippie here, but it’s these kinds of idealistic, and possibly even overambitious, aspirations that should be guiding us toward our collective future — even if we know that it isn’t fully attainable.

I want to be able to use my phone to start, debug and monitor my car and my motorcycle. I want the NFC chip in my phone to automatically unlock my workstations as I approach them — regardless of which operating systems I choose to use. I want to be able to pick which payment system my phone defaults to based on who provides the terms and security practices I’m most comfortable with. I want instant access to every piece of digital media on the planet on any device at any time (and I’m more than willing to pay a fair price for it). I want all my devices to integrate, federate and seamlessly collaborate, sharing bandwidth and sensor input, combining themselves like an array of radio telescopes into something bigger and more powerful than what each one represents individually. I want to pick and choose from dozens of different services for connectivity, telephony, media, payments, news, messaging, social networking, geolocation, authentication and every other service that exists now and that will exist tomorrow. I want to pick the PC, phone, tablet, set-top box, watch, eyewear and [insert nonspecific connected device here] that I like best, and be assured that they will all integrate on a deep level, rather than feeling like I’m constantly being penalized for daring to cross the sacred ecosystem barrier. I want a future limited only by what’s possible rather than by intellectual property disputes, petty corporate feuds, service contracts, shareholder value and artificial lock-in.

And more than anything else, I want a future that is as much about making us intellectually and culturally rich as it is about material wealth.

Free as in Speech

Although we are very clearly living in a time (and headed for a future) that is determined as much by what is profitable as what is possible, it’s important to acknowledge that there are plenty of inspiring exceptions. While it’s undeniable that the U.S. space program has recently fallen upon some difficult times (relying on the Russians to ferry astronauts to and from the ISS sure seemed like a good idea at the time), there’s nothing like watching robots conduct scientific experiments on Mars, or reading about the atmospheric composition of exoplanets, to put NASA’s spectacular portfolio of accomplishments into perspective; starting as early as the late ’60s, academics, engineers, computer scientists and the Department of Defense all came together around the concept of interoperability, which ultimately led to the creation of the Internet and the World Wide Web — possibly two of the most politically, culturally and economically important and disruptive inventions in human history; and then there are collaborative resources like Wikipedia; open-source software projects like Linux, the various Apache projects, Bitcoin and Android; open hardware projects like Arduino, WikiHouse and the Hyperloop project; free and open access to GPS signals; and the myriad of incredibly creative crowd-funded Kickstarter projects that seem to make the rounds weekly.

The reality of technology — and perhaps the reality of most things complex, interesting and rewarding enough to hold our collective attention — is that it is not governed by absolutes, but rather manifests itself as the aggregate of multiple and often competing dynamics. I’ve come to think of technology as kind of like the weather: It is somewhat predictable up to a point, and there are clearly patterns from which we can derive assumptions, but ultimately there are so many variables at play that the only way to know for sure what’s going to happen is to wait and see.

But there is one key way in which technology is not like the weather: We can control it. One of my favorite quotes is by the famous computer scientist Alan Kay who once observed that the best way to predict the future is to invent it. If we want to see a future in which devices freely interoperate, and consumers have choices as to what they do with those devices and the services they connect to, it is up to us to both demand and create it. If we choose instead to remain complicit, we will get a future concerned much more with maximizing profits than human potential. Clearly we need to strike the right balance.

Insofar as technology is a manifestation of our creative expression, it is not unlike free speech. And like free speech, we don’t have to always like or agree with what people choose to do with it, but we do have a collective and uncompromising responsibility to protect it.

The Koyal Group Info Mag: NASA Is Funding Research on Deep Sleep for Transporting Astronauts to Mars


Putting space travelers into a state of deep sleep has been a staple of interstellar science fiction for quite some time, but despite originating as a far-fetched concept, the idea of using suspended animation to enable deep space travel might soon become science fact.

If you’re unfamiliar with the concept, here's a quick rundown. Traveling far into space is a tricky endeavor. With existing technology, traveling to a planet like Mars takes about 180 days, for example. Keeping a crew of people alive (and entertained) in space for that long isn't hard, but it does require a lot of food, water, energy, and other supplies. This makes manned long-distance space travel extremely expensive, since hauling more supplies requires huge amounts of storage space, and thousands of additional dollars just to get all that stuff into orbit.

In theory, suspended animation would help solve this problem. If astronauts could be placed in a deep sleep during the journey, they would require far fewer resources along the way. Instead, they could just be put to sleep at the beginning and woken back up when they arrive at their destination.

Now, with a manned mission to Mars likely in its sights, NASA has begun to explore the viability of such an idea, and has recently funded a study by Atlanta-based aerospace engineering firm SpaceWorks Enterprises to help work out the kinks in the process.

The bulk of the study revolves around placing humans in torpor -- a state in which metabolic and physiological activity is drastically slowed. To do this, the company has developed a three-stage system. Step one involves sedating the person and using a neuromuscular blockade to prevent movement. Step two physically lowers the person's body temperature by about 10 degrees Fahrenheit, reducing cellular activity and metabolic rate by around 50 to 70 percent; this is achieved with the help of cooling pads and a nasally inhaled coolant that lowers the subject's temperature from the inside out. Then, once in torpor, the subject is hooked up to an intravenous drip that supplies the body with all the nutrients needed to stay alive.
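Laid out as a simple structured summary, the procedure looks like this; the field names below are arbitrary, and the values are just the figures quoted above.

```python
# Structured recap of the three-stage torpor procedure described
# above. Field names are arbitrary; values come from the article.
TORPOR_PROTOCOL = [
    {
        "stage": 1,
        "action": "sedate subject; apply neuromuscular blockade",
        "purpose": "prevent movement",
    },
    {
        "stage": 2,
        "action": "lower body temperature by ~10 degrees F via "
                  "cooling pads and nasally inhaled coolant",
        "purpose": "cut cellular activity and metabolism by 50-70%",
    },
    {
        "stage": 3,
        "action": "start intravenous nutrient drip",
        "purpose": "sustain the subject while in torpor",
    },
]

for step in TORPOR_PROTOCOL:
    print(f"Stage {step['stage']}: {step['action']} ({step['purpose']})")
```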

Using these methods, SpaceWorks has reportedly managed to keep a person in stasis for a week -- an impressive feat, but even so, there's still much work to be done before the technology is ready for primetime. In addition to extending the length of the stasis period, the company has a handful of other hurdles to overcome. The potential onset of pneumonia, muscle atrophy, and bone loss has yet to be addressed, and the long-term effects of stasis on human organs are still largely unknown. SpaceWorks still has a long road ahead of it, but with a few more years of research, it's not unreasonable to think that suspended animation, cryostasis, torpor -- whatever you want to call it -- might finally bring a manned mission to Mars within reach.