The Americas’ Oldest Human Remains Lost in Brazil Museum Fire

Your daily selection of the latest science news!

According to Live Science

Cover Image: This drone view shows Rio de Janeiro’s 200-year-old National Museum on Sept. 3, 2018, a day after a massive fire ripped through the building.

Credit: Mauro Pimentel/AFP/Getty Images

A large fire destroyed Brazil’s National Museum in Rio de Janeiro on Sunday (Sept. 2), ruining one of Latin America’s most venerable cultural and research institutions and the 200-year-old home of more than 20 million artifacts, according to its website.

 
No one has been reported injured or killed in the blaze itself, but a number of priceless artifacts are believed to have been destroyed, according to CNN. The most famous of those artifacts was Luzia, the 11,000-year-old skull of a Paleoindian woman whose remains are the earliest discovered in the Americas. A number of irreplaceable artworks and Egyptian mummies are also believed lost, though a full accounting is not yet possible, since investigators have yet to enter the building, according to The Guardian. [Photos: The Monkeys of Brazil’s Atlantic Forest]


This article and its images were originally posted on [Live Science] September 4, 2018 at 09:45AM. All credit to both the author Rafi Letzter and Live Science | ESIST.T>G>S Recommended Articles Of The Day.

 


 

 

 

Eating bone marrow played a key role in the evolution of the human hand

Your daily selection of the latest science news!

According to Phys.org – latest science and technology news stories

Student using stone tool. Credit: Erin Marie Williams-Hatala

The strength required to access the high calorie content of bone marrow may have played a key role in the evolution of the human hand, and may explain why other primates’ hands are not like ours, research at the University of Kent has found.

In an article in the Journal of Human Evolution, a team led by Professor Tracy Kivell of Kent’s School of Anthropology and Conservation concludes that although stone tool making has always been considered a key influence on the evolution of the human hand, accessing bone marrow generally has not.

It is widely accepted that the unique dexterity of the human hand evolved, at least in part, in response to stone tool use during our evolutionary history.

Archaeological evidence suggests that early hominins participated in a variety of tool-related activities, such as nut-cracking, cutting flesh and smashing bone to access marrow, as well as making stone tools. However, it is unlikely that all these behaviours equally influenced modern human hand anatomy.

To understand the impact these different actions may have had on the evolution of human hands, the researchers measured the pressures experienced by the hands of 39 individuals during different stone tool behaviours—nut-cracking, marrow acquisition with a hammerstone, flake production with a hammerstone, and use of a handaxe and of a stone tool (i.e. a flake)—to see which digits were most important for manipulating the tool.

They found that the pressures varied across the different behaviours, with nut-cracking generally requiring the lowest pressure while making the flake and accessing marrow required the greatest pressures. Across all of the different behaviours, the thumb, index finger and middle finger were always most important.

Professor Kivell says this suggests that nut-cracking force may not be high enough to elicit changes in the formation of the human hand, which may be why other primates are adept nut-crackers without having a human-like hand.

In contrast, making stone flakes and accessing marrow may have been key influences on our hand anatomy due to the high stress they cause on our hands. The researchers concluded that eating marrow, given its additional benefit of high calorific value, may have also played a key role in the evolution of human dexterity.

The manual pressures of stone tool behaviors and their implications for the evolution of the human hand, by Erin Marie Williams-Hatala, Kevin G. Hatala, McKenzie Gordon and Margaret Kasper, all of Chatham University, Pittsburgh, USA, and Alastair Key and Tracy Kivell of the University of Kent, is published in the Journal of Human Evolution.

Continue reading…


This article and its images were originally posted on [Phys.org – latest science and technology news stories] July 11, 2018 at 11:33AM. All credit to the author and Phys.org – latest science and technology news stories | ESIST.T>G>S Recommended Articles Of The Day.

 

 

 

4 Boys Rescued from Thai Cave During Risky Dive Mission

Your daily selection of the latest science news!

According to Live Science


Rescue workers are seen at the Tham Luang cave area on July 8, 2018; this morning, divers entered the cave complex on a risky mission to extract the team, one by one.

Credit: LILLIAN SUWANRUMPHA/AFP/Getty Images

About 18 divers entered the cave in Chiang Rai, Thailand, on Sunday morning (July 8); 12 boys and their soccer coach have been trapped there for two weeks, according to news reports.

 
Many had said they considered a diving rescue a last resort, as the boys have no diving experience and some were malnourished and exhausted from their time in the cave. But rain began falling in the area on Saturday, and officials were concerned that monsoon rains, which were forecast for today, would make such a rescue essentially impossible.

 
“If we don’t start now, we might lose the chance,” Chiang Rai acting Gov. Narongsak Osatanakorn said, according to news reports. As water levels rise in the cave, the distance the boys would have to dive increases. [The Very Real Risks of Rescuing the Boys Trapped in a Thai Cave]

 
The boys and their coach hiked into the Tham Luang cave complex when it was relatively dry, only to be walled in after monsoon rains triggered a flash flood.

 
This past week, water levels have been declining in the cave, as the rain has held off and officials have continued to pump water out of the cave system.

 
“The shorter the dive distance, the increased margin of safety,” George Veni, executive director of the National Cave and Karst Research Institute and president of the International Union of Speleology, told Live Science. “Also, air bells may develop along the way to create a series of two or more shorter dives instead of one long dive,” Veni said, adding that “lower water levels means the force of the water is less.” [Photos: Rescuers Race Against Time to Save Soccer Team Trapped in Thai Cave]

 
One of the big concerns with cave diving is the violently flowing water that can make a short dive risky for even an expert, said Edd Sorenson, a regional coordinator in Florida for the nonprofit International Underwater Cave Rescue and Recovery. (Sorenson is also the safety officer for the National Speleological Society-Cave Diving Section.)

 
The Thai Navy SEALs were teaching the trapped soccer team the basics of cave diving, but as recently as Friday, Gov. Osatanakorn said the kids were not adequately trained to make the risky dive out.

 
The team is reportedly holed up in a chamber about 2.5 miles (4 kilometers) into the cave, with experienced divers taking about 11 hours up and back during delivery missions over the past week.

 
“Diving in caves is very risky; it’s very unforgiving. If something goes wrong, you can’t go up for air,” Veni told Live Science earlier in the week. “In case of an emergency, you may have to swim underwater for 10 minutes and do some underwater gymnastics to get through a narrow space and get up to air.”

 
Veni added, “You’re in total darkness; essentially, you’re swimming through mud.”

 
Each of the boys will be paired up with two trained divers, and it will take at least 11 hours for the first person to be brought out.

 
This is an ongoing story, and Live Science will continue to update this article as news comes in on the rescue mission.

Continue reading…


This article and its images were originally posted on [Live Science] July 8, 2018 at 12:53AM. All credit to both the author Jeanna Bryner and Live Science | ESIST.T>G>S Recommended Articles Of The Day.

 

 

 

This 3,000-Year-Old Horse Got a Human-Style Burial

Your daily selection of the latest science news!

According to Live Science

Cover Image: Discovered in 2011, the ancient remains of a chariot-pulling horse were found in a tomb more than five feet underground.

Credit: Schrader et al./Antiquity Journal, doi.org/10.15184/aqy.2017.239

 

More than 3,000 years ago in the Nile River Valley, a body was carefully prepared for ceremonial burial. It was wrapped in a shroud and placed in a tomb, surrounded by important objects that demonstrated its elevated status.

 
The mourners probably had long faces as they sent their loved one to an eternal rest.

 
But the longest face of all likely belonged to the grave’s occupant — a chariot-pulling horse, who was important enough to merit an ornate burial typically reserved for high-ranking people.

 
Scientists first unearthed the horse in 2011 in Tombos, a site located in the Nile Valley in what is now Sudan. The skeleton dates to around 949 B.C., and it is thought to be the most complete horse skeleton from that period ever found, according to a new study describing the grave and its contents, published online April 25 in the journal Antiquity. [Ancient Nubia: A Brief History]

 
The ancient Egyptians established Tombos around 1450 B.C. as a foreign outpost in the rival kingdom of Nubia. The city later emerged as an important Nubian community after withdrawing from Egyptian rule. Artifacts unearthed from archaeological sites in Tombos reveal much about the influence of Egyptian culture, and they also illuminate aspects of daily life that were distinctly Nubian, the scientists wrote in the study.

 
When the site was first excavated, archaeologists found a tomb complex with a chapel and pyramid aboveground, and a shaft leading to multiple chambers underground — a design typically associated with “elite” pyramid tombs, according to the study. The four burial chambers contained human remains from around 200 people representing several generations, along with pottery, tools and decorative objects.

 
However, the tomb held very few animal remains, and finding such a well-preserved horse — in the shaft underneath the chapel, at a depth of about 5 feet (1.6 meters) — surprised the scientists, study co-author Michelle Buzon, a bioarchaeologist in the Department of Anthropology at Purdue University, said in a statement.

 
“It was clear that the horse was an intentional burial, which was super fascinating,” Buzon said.

 


The tomb holding the horse’s skeleton had multiple chambers containing artifacts and additional remains belonging to 200 people.

Credit: Schrader et al./Antiquity Journal, doi.org/10.15184/aqy.2017.239

 
Bits of chestnut fur with white markings still clung to the animal’s lower hind legs, and the researchers found decayed remnants of a shroud that helped them to date the burial to between 1,005 and 893 B.C., they wrote in the study. The tomb shaft around the skeleton also revealed other artifacts that hinted at the horse’s status, including a carved scarab beetle and a piece of iron — likely once part of the animal’s bridle — that is the oldest example of iron unearthed in Africa.

 
After examining the horse’s teeth and bones, the scientists determined that the animal was a mare that died when it was between 12 and 15 years old. Further analysis of the skeleton showed that it led an active life, and signs of stress in its ribs and spine hinted that it wore a harness for pulling a chariot. However, its age at the time of death indicates that the animal was cared for and valued by its owner during its lifetime, the study authors reported.

 
A tomb burial for the horse suggests that the animal probably played a significant role in its owner’s household, and was more than a mere beast of burden, while the iron bridle piece found in the tomb — an expensive and rare item that would have been made specifically for the horse — further helps to establish its elevated status, according to the study.

 
While formal burials for horses were rare at the time, they later became more commonplace in Nubian and Egyptian society, around 728 to 657 B.C. But the attention to detail in this burial and the reverence shown suggest that horses may have already achieved a symbolic representation of wealth and power for Nubian people, and could have played a more important role in Nubian culture — in life and in death — than has been previously suspected, the researchers reported.

Read more…


This article and images were originally posted on [Live Science] April 26, 2018 at 06:48PM. Credit to Author and Live Science | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Researchers Find New DNA Structure in Living Human Cells

Your daily selection of the latest science news!

According to Breaking Science News

A team of scientists from the Garvan Institute of Medical Research and the Universities of New South Wales and Sydney has identified a new DNA structure — called the intercalated motif (i-motif) — inside living human cells.

A twisted ‘knot’ of DNA, the i-motif has never before been directly seen inside living cells. Image credit: Zeraati et al, doi: 10.1038/s41557-018-0046-3.

Deep inside the cells in our body lies our DNA. The information in the DNA code — all 6 billion A, C, G and T letters — provides precise instructions for how our bodies are built, and how they work.

The iconic ‘double helix’ shape of DNA has captured the public imagination since 1953, when James Watson and Francis Crick famously uncovered the structure of DNA.

However, it’s now known that short stretches of DNA can exist in other shapes, in the laboratory at least — and scientists suspect that these different shapes might play an important role in how and when the DNA code is ‘read.’

“When most of us think of DNA, we think of the double helix. This research reminds us that totally different DNA structures exist — and could well be important for our cells,” said co-lead author Dr. Daniel Christ, from the Kinghorn Centre for Clinical Genomics at the Garvan Institute of Medical Research and St Vincent’s Clinical School at the University of New South Wales.

“The i-motif is a four-stranded ‘knot’ of DNA,” added co-lead author Dr. Marcel Dinger, also from the Garvan Institute of Medical Research and the University of New South Wales.

“In the knot structure, C letters on the same strand of DNA bind to each other — so this is very different from a double helix, where ‘letters’ on opposite strands recognize each other, and where Cs bind to Gs [guanines].”

Although researchers have seen the i-motif before and have studied it in detail, it has only been witnessed in vitro — that is, under artificial conditions in the laboratory, and not inside cells. In fact, they have debated whether i-motif DNA structures would exist at all inside living things — a question that is resolved by the new findings.
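
To give a concrete sense of what a C-rich, i-motif-prone stretch looks like, here is a minimal sketch in Python. The pattern is a common rule-of-thumb (four tracts of three or more cytosines separated by short loops, mirroring the usual G-quadruplex consensus on the opposite strand), not a method taken from the paper itself.

```python
import re

# Rule-of-thumb pattern for i-motif-prone DNA: four runs of >=3
# cytosines separated by loops of 1-7 bases. Illustrative only.
IMOTIF = re.compile(r"C{3,}[ACGT]{1,7}C{3,}[ACGT]{1,7}C{3,}[ACGT]{1,7}C{3,}")

seq = "ATCCCTAACCCTAACCCTAACCCTAA"  # human telomeric C-strand repeats
for match in IMOTIF.finditer(seq):
    print(match.start(), match.group())  # -> 2 CCCTAACCCTAACCCTAACCC
```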

To detect the i-motifs inside cells, Dr. Christ, Dr. Dinger and their colleagues developed a precise new tool — a fragment of an antibody molecule — that could specifically recognize and attach to i-motifs with a very high affinity.

Until now, the lack of an antibody that is specific for i-motifs has severely hampered the understanding of their role.

Crucially, the antibody fragment didn’t detect DNA in helical form, nor did it recognize ‘G-quadruplex structures’ (a structurally similar four-stranded DNA arrangement).

With the new tool, the team uncovered the location of ‘i-motifs’ in a range of human cell lines.

Using fluorescence techniques to pinpoint where the i-motifs were located, the study authors identified numerous spots of green within the nucleus, which indicate the position of i-motifs.

The scientists showed that i-motifs mostly form at a particular point in the cell’s ‘life cycle’ — the late G1 phase, when DNA is being actively ‘read.’

They also showed that i-motifs appear in some promoter regions — areas of DNA that control whether genes are switched on or off — and in telomeres, ‘end sections’ of chromosomes that are important in the aging process.

“We think the coming and going of the i-motifs is a clue to what they do. It seems likely that they are there to help switch genes on or off, and to affect whether a gene is actively read or not,” said study first author Dr. Mahdi Zeraati, also from the Garvan Institute of Medical Research and the University of New South Wales.

“We also think the transient nature of the i-motifs explains why they have been so very difficult to track down in cells until now,” Dr. Christ added.

“It’s exciting to uncover a whole new form of DNA in cells — and these findings will set the stage for a whole new push to understand what this new DNA shape is really for, and whether it will impact on health and disease,” Dr. Dinger said.

The team’s results appear in the journal Nature Chemistry.

Read more…


This article and images were originally posted on [Breaking Science News] April 24, 2018 at 03:11PM. Credit to Author and Breaking Science News | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

 

 

Scientists Find DNA Clues That Show How Humans Are Evolving Right Now

Your daily selection of the latest science news!

According to ScienceAlert


Evolution never really stops, so it stands to reason that we humans are still undergoing evolutionary changes. Now some researchers have figured out how, finding evidence in the human genome that our fertility and heart function are changing.

Natural selection isn’t like getting superpowers. It involves slowly wrought changes that take generations, and are often so subtle that we don’t even notice.

Geneticists from the University of Queensland in Australia have figured out a way to detect what those changes are – a statistical method to find mutations in the DNA.

Jian Yang, Jian Zeng and a team of researchers from the university’s Institute for Molecular Bioscience and Queensland Brain Institute studied the genomic data from 126,545 individuals in the UK Biobank, an anonymised health database in the UK.

They closely examined 28 complex traits, such as heel bone mineral density, male pattern baldness, BMI, female age at first menstruation and menopause, female age when giving live birth for the first time, grip strength, and hip-to-waist ratio.

By studying the genes associated with these traits in individuals at different ages, it’s possible to see differences between generations.
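
As a toy illustration of that age-comparison logic (fabricated numbers, and not the authors' actual statistical model, which works on genome-wide association data): if a variant harms survival, its carriers should thin out among the older participants of a cross-sectional cohort.

```python
# Hypothetical carrier counts in young vs old participants.
young = {"carriers": 480, "total": 4000}
old = {"carriers": 360, "total": 4000}

f_young = young["carriers"] / young["total"]
f_old = old["carriers"] / old["total"]
print(f"frequency in young: {f_young:.3f}, in old: {f_old:.3f}")
if f_old < f_young:
    # Carriers are rarer among survivors to old age: the pattern
    # that negative selection on survival would leave behind.
    print("declines with age, consistent with negative selection")
```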

“In natural selection, or ‘survival of the fittest’, characteristics that improve survival are more likely to be passed on to the next generation,” Yang said.

“The opposite also occurs, when DNA mutations with a detrimental effect on fitness are less likely to be passed on, by a process called negative selection.”

The researchers said they found evidence of negative selection – the removal of deleterious gene variants – in several traits. And the strongest evidence was in traits related to cardiovascular function and reproductive function.

For cardiovascular function, the team found changes associated with waist circumference and waist-to-hip ratio. An excess of fat around the waist had previously been found to be significantly linked to an increased risk of cardiovascular disease.

They also found evidence of changes in blood pressure.

But female age at menopause – associated with fertility – showed the strongest change. Age at first menstruation and age at first live birth also showed markers – which, the researchers said, made sense, since there’s a strong correlation between fertility and genetic fitness.

This isn’t the first time scientists have analysed materials from the Biobank for evolutionary changes in humans.

Last year, researchers from the University of California, Irvine studied the DNA of over 500,000 individuals, looking for both positive and negative selection. They found that evolution was favouring a higher BMI in men – probably due to muscle mass – and a younger age at first birth in women.

Don’t get too excited, though. Other scientists found in a 2011 study that evolutionary changes develop fairly frequently, but don’t “stick” – it takes about a million years for an evolutionary trait to develop and last.

The point of the study isn’t necessarily to identify changes that will make a huge impact anytime soon, but to learn more about evolution and how selection works.

“Negative selection prevents ‘bad’ mutations from spreading through the population, meaning that common DNA variants are likely to have small or no effect on traits,” Zeng said.

“This study will help us better understand the genetic basis of complex traits and inform the design of future experiments in complex traits and medical genomics.”

The team’s research has been published in the journal Nature Genetics.

Read more…


This article and images were originally posted on [ScienceAlert] April 18, 2018 at 08:40AM. Credit to Author and ScienceAlert | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

90,000-Year-Old Finger Bone Found in Saudi Arabia Could Rewrite Human Migration History

Your daily selection of the latest science news!

According to Breaking Science News

An international team of researchers has discovered a fossilized human finger bone in the Nefud Desert of Saudi Arabia estimated to be about 90,000 years old. The discovery is described in the journal Nature Ecology and Evolution.


The small bone (just over an inch, or 3.3 cm, long) was found at the site of Al Wusta, an ancient fresh-water lake located in what is now the hyper-arid desert.

Dubbed Al Wusta-1, the relic is the oldest directly dated Homo sapiens fossil outside of Africa and the Levant, and suggests that people traveled further than initially thought during the first reported human migration into Eurasia.

Prior to this discovery, it was widely believed that early ventures from Africa into Eurasia had been unsuccessful and had only ever reached the edge of the neighboring Mediterranean forests of the Levant.

“This discovery for the first time conclusively shows that early members of our species colonized an expansive region of southwest Asia and were not just restricted to the Levant,” said lead author Dr. Huw Groucutt, from the University of Oxford in the UK and the Max Planck Institute for the Science of Human History in Germany.

“The ability of these early people to widely colonize this region casts doubt on long held views that early dispersals out of Africa were localized and unsuccessful.”

To verify their find and date its origins, the researchers scanned the Al Wusta-1 bone in 3D and compared its shape against finger bones from other Homo sapiens, from other early humans such as Neanderthals, and from species of primates.

Using a technique called uranium series dating, the team fired a laser to make microscopic holes in the bone and measured the ratio between tiny traces of radioactive elements. These ratios revealed that the fossil was 88,000 years old.
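
For a sense of how such a ratio translates into an age, here is a simplified uranium-thorium calculation in Python. It assumes no initial thorium and uranium isotopes in secular equilibrium, which real laboratories do not simply assume; the half-life value and the example ratio are illustrative.

```python
import math

TH230_HALF_LIFE_YR = 75_584          # approximate 230Th half-life
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE_YR

def useries_age(th230_u238_activity_ratio: float) -> float:
    """Age in years from the (230Th/238U) activity ratio, assuming
    no initial 230Th and 234U/238U in secular equilibrium."""
    return -math.log(1.0 - th230_u238_activity_ratio) / LAMBDA_230

# An activity ratio near 0.554 corresponds to roughly 88,000 years.
print(f"{useries_age(0.554):,.0f} years")
```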

Other dates obtained from associated fossils and sediments converged to a date of approximately 90,000 years ago.

In addition to the human remains, abundant stone tools made by humans and numerous animal fossils, including those of hippopotamus and tiny fresh water snails, were found at the site.

“The Arabian Peninsula has long been considered to be far from the main stage of human evolution,” said senior author Professor Michael Petraglia, from the Max Planck Institute for the Science of Human History.

“This discovery firmly puts Arabia on the map as a key region for understanding our origins and expansion to the rest of the world. As fieldwork carries on, we continue to make remarkable discoveries in Saudi Arabia.”

Read more…


This article and images were originally posted on [Breaking Science News] April 11, 2018 at 12:14PM. Credit to Author and Breaking Science News | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Schizophrenia a side effect of human development

Your daily selection of the latest science news!

According to Medical Xpress

Schizophrenia may have evolved as an “unwanted side effect” of the development of the complex human brain, a new study has found.

The study identified changed gene expression in the area of the brain that is most different between humans and other animals, including our closest relatives, non-human primates.

Published in Schizophrenia, the study was undertaken by a group of researchers from Swinburne, The Florey Institute of Neuroscience & Mental Health and University of Melbourne. It reveals major changes in gene expression in the frontal area of the brains of those with schizophrenia.

“This is the area of our brain that evolved latest and most sets us apart from other species,” says Professor Brian Dean of Swinburne’s Centre for Mental Health and the Florey Institute.

“There is the argument that schizophrenia is an unwanted side effect of developing a complex human brain and our findings seem to support that argument.”

A genetic susceptibility

Schizophrenia is now thought to occur in people with a genetic susceptibility after they encounter a harmful environmental factor, such as premature birth or drug use.

“It’s thought that schizophrenia occurs when environmental factors trigger changes in gene expression in the human brain. Though this is not fully understood, our data suggests the frontal area of the brain is severely affected by such changes,” says Professor Dean.

While undertaking the research, Professor Dean’s team conducted a post-mortem brain study in which they compared gene expression between 15 patients with schizophrenia and 15 without.

In brains from people known to have had schizophrenia, the team found 566 instances of altered gene expression in the frontal pole, the most anterior part of the brain, and fewer changes in proximal regions.

“These brain areas are known to mediate schizophrenia-related traits,” says Professor Dean.

A key finding in this study is a pathway containing 97 differentially-expressed genes that contains a number of potential drug treatment targets that could particularly affect people with schizophrenia.

“A better understanding of changes in this pathway could suggest new drugs to treat the disorder,” says Professor Dean.

The study paints a complex picture of the causes of schizophrenia, he says, but it suggests that modern technologies can be used to help unravel these complexities.

Read more…


This article and images were originally posted on [Medical Xpress] February 21, 2018 at 08:38AM. Credit to Author and Medical Xpress

 

 

 

DNA shows first modern Briton had dark skin, blue eyes

Your daily selection of the latest science news!

According to Phys.org – latest science and technology news stories

A reconstruction model from the skull of ‘Cheddar Man’ after DNA analysis of the 10,000-year-old skeleton shows early Britons had dark skin and blue eyes

The first modern Briton had dark skin and blue eyes, London scientists said on Wednesday, following groundbreaking DNA analysis of the remains of a man who lived 10,000 years ago.

Known as “Cheddar Man” after the area in southwest England where his skeleton was discovered in a cave in 1903, the ancient man has been brought to life through the first ever full DNA analysis of his remains.

 

In a joint project between Britain’s Natural History Museum and University College London, scientists drilled a 2mm hole into the skull and extracted bone powder for analysis.

 

Their findings transformed the way they had previously seen Cheddar Man, who had been portrayed as having brown eyes and light skin in an earlier model.

 

“It is very surprising that a Brit 10,000 years ago could have that combination of very blue eyes but really dark skin,” said the museum’s Chris Stringer, who for the past decade has analysed the bones of people found in the cave.

 

The findings suggest that the lighter pigmentation now typical of northern European populations is a more recent feature than previously thought.

 

Cheddar Man’s tribe migrated to Britain at the end of the last Ice Age and his DNA has been linked to individuals discovered in modern-day Spain, Hungary and Luxembourg.


Model makers Adrie (L) and Alfons Kennis created the bust of ‘Cheddar Man’ using a high-tech scanner which had been designed for the International Space Station

Selina Brace, a researcher of ancient DNA at the museum, said the cave environment Cheddar Man was found in helped preserve his remains.

 

“In the cave you have a really nice, cool, dry, constant environment, and that basically prevents the DNA from breaking down,” she said.

 

A bust of Cheddar Man, complete with shoulder-length dark hair and short facial hair, was created using 3D printing.

 

It took close to three months to build the model, with its makers using a high-tech scanner which had been designed for the International Space Station.

 

Alfons Kennis, who made the bust with his brother Adrie, said the DNA findings were “revolutionary”.

 

“It’s a story all about migrations throughout history,” he told Channel 4 in a documentary to be aired on February 18.

 

“It maybe gets rid of the idea that you have to look a certain way to be from somewhere. We are all immigrants,” he added.

Read more…


This article and images were originally posted on [Phys.org – latest science and technology news stories] February 7, 2018 at 01:45AM. Credit to Author and Phys.org – latest science and technology news stories | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

 

A whale with words: Orca mimics human speech

Your daily selection of the latest science news!

According to Phys.org – latest science and technology news stories

Credit: CC0 Public Domain

Her head above water, Wikie the killer whale looks at the human trainer next to her pool, listens, then loudly vocalises: “Hello.”

It is not a perfect imitation, but, astonishingly, recognisable.

 

It is the first scientific demonstration of an orca mimicking human words, which also included “Amy”—the name of Wikie’s handler—”Bye-Bye”, and “One-Two-Three”.

 

“We were not expecting a perfect match, like a parrot,” researcher Jose Abramson of the Complutense University of Madrid said of the experiment reported Wednesday in the journal Proceedings of the Royal Society B.

 

Yet in a trial with six different words or phrases, some of Wikie’s attempts were “a very high quality match”, especially given that orcas’ vocal anatomy is “totally different” to ours.

 

It was hard not to jump for joy when Wikie first “spoke”, Abramson told AFP, adding the research team had not quite known what to expect.

 

“When we tried ‘hello’ and she did the sound… some emotional responses came from the trainers. For us (the scientists) it was very difficult not to say anything…”

 

Seeking to measure orcas’ ability to copy new sounds, Abramson and a team turned to Wikie, a captive killer whale at the Marineland Aquarium in Antibes, southern France.

 

Trained to perform tricks for Marineland visitors, Wikie was a good candidate as she had already learnt the gesture commanding her to “copy” what her trainer does.

 

As part of the trial, the killer whale was asked to mimic never-before-heard sounds made by other orcas with different dialects from different family groups.

 

Then, she was made to repeat human words.

 

In recordings of the experiment, Wikie takes several stabs at “hello”. Every time, she voices two syllables with something resembling an “l” in the middle and an “o” at the end.

 

Sign of culture

 

The most convincing attempt is a deep, throaty sound, a bit like a cartoon demon might say “hello”.

 

The orca also manages an eerie whisper that does sound remarkably like “Amy”.

 

But she seems to have more trouble with “One-Two-Three”. The last syllable sounds a bit like a “raspberry”—that sound of contempt humans make by pushing the tongue between the lips and forcibly expelling air to produce a vibration.

 

Recordings of Wikie can be found here: figshare.com/collections/Suppl … inus_orca_i_/3982647

 

Abramson said the orca’s ability to mimic does not mean she understands what she is saying.

 

The experiment was designed in such a way that no meaning or context was attached to any of the words.

 

But it does show, once more, that orcas are very smart animals indeed, he added.

 

Imitation skills are a sign of intelligence, as they allow animals to learn lessons from peers.

 

The alternative, learning through trial and error, “can be very expensive… you can die just trying poisonous fish, for example. But if you learn from the experience of the others it’s more safe,” said Abramson.

 

“One of the main things that fired the evolution of human intelligence is the ability to have social learning, to imitate, and to have culture.

 

“So if you find that other species have also the capacity for social learning, and of complex learning that could be imitation or teaching, you expect a lot of flexibility in that species.”

 

This, in turn, allows a species to adapt more easily to changes in its environment, improving survival chances, said the researcher.

 

Killer whales have previously been shown to mimic dolphin sounds.

 

Apart from parrots, whose copycat skills are well-documented, beluga whales, dolphins, seals, and an Asian elephant were previously reported to have tried mimicking human language.



More information:
Imitation of novel conspecific and human speech sounds in the killer whale (Orcinus orca), Proceedings of the Royal Society B, rspb.royalsocietypublishing.or … .1098/rspb.2017.2171

Read more…


This article and images were originally posted on [Phys.org – latest science and technology news stories] January 31, 2018 at 02:45AM. Credit to Author and Phys.org – latest science and technology news stories | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Truck damages Peru’s ancient Nazca lines

Your daily selection of the latest science news!

According to Phys.org – latest science and technology news stories

This Peruvian Ministry of Culture picture shows damage caused by a truck that illegally entered the archaeological site where the ancient Nazca lines are located on January 27

Peru’s ancient Nazca lines were damaged when a driver accidentally plowed his cargo truck into the fragile archaeological site in the desert, officials said Tuesday.

The lines, considered a UNESCO World Heritage site, are enormous drawings of animals and plants etched in the ground some 2,000 years ago by a pre-Inca civilization. They are best seen from the sky.

 

The driver ignored warning signs as he entered the Nazca archaeological zone on January 27, the Ministry of Culture said in a statement.

 

The truck “left deep prints in an area approximately 100 meters long,” damaging “parts of three straight lined geoglyphs,” the statement read.

 

Security guards detained the driver and filed charges against him at the local police station, the statement added.

 

Entering the area is strictly prohibited due to the fragility of the soil around the lines, and access is only allowed wearing special foam-covered foot gear, according to Peruvian authorities.

 

The lines criss-cross the Peruvian desert over more than 500 square kilometers (200 square miles).

 

Created between 500 BC and AD 500 by the Nazca people, they have long intrigued archaeologists with the mystery of their size and their meticulously drawn figures.

 

Some of the drawings depict living creatures, others stylized plants or fantastical beings, others geometric figures that stretch for kilometers (miles).

 

This is not the first time the Nazca lines have been damaged in recent years.

 

In September 2015 a man was detained after he entered the site and wrote his name on one of the geoglyphs.

 

In December 2014, Greenpeace activists set up large letters beside one of the designs, known as the Hummingbird, that read: “Time for change! The future is renewable.”

 

The protest drew a furious reaction from Peru, which at the time was hosting UN talks aimed at curbing global warming.

Read more…


This article and images were originally posted on [Phys.org – latest science and technology news stories] January 30, 2018 at 03:21PM. Credit to Author and Phys.org – latest science and technology news stories | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Eyes and Eardrums Move in Sync, Researchers Discover

Your daily selection of the latest science news!

According to Breaking Science News

Duke University Professor Jennifer Groh and co-authors have found that keeping the head still but shifting the eyes to one side or the other sparks vibrations in the eardrums, even in the absence of any sounds. Surprisingly, these vibrations start slightly before the eyes move, indicating that motion in the ears and the eyes are controlled by the same motor commands deep within the brain.

The eardrums move when the eyes move. Image credit: Jessi Cruger & David Murphy, Duke University.

Our eyes and ears work together to make sense of the sights and sounds around us. Most people find it easier to understand somebody if they are looking at them and watching their lips move.

And in a famous illusion called the McGurk Effect, videos of lip cues dubbed with mismatched audio cause people to hear the wrong sound.

But scientists are still puzzling over where and how the brain combines these two very different types of sensory information.

“Our brains would like to match up what we see and what we hear according to where these stimuli are coming from, but the visual system and the auditory system figure out where stimuli are located in two completely different ways,” Professor Groh said.

“The eyes are giving you a camera-like snapshot of the visual scene, whereas for sounds, you have to calculate where they are coming from based on differences in timing and loudness across the two ears.”

“Because the eyes are usually darting about within the head, the visual and auditory worlds are constantly in flux with respect to one another.”

In the experiments, 16 participants were asked to sit in a dark room and follow shifting LED lights with their eyes.

Each participant also wore small microphones in their ear canals that were sensitive enough to pick up the slight vibrations created when the eardrum sways back and forth.

Though eardrums vibrate primarily in response to outside sounds, the brain can also control their movements using small bones in the middle ear and hair cells in the cochlea.

These mechanisms help modulate the volume of sounds that ultimately reach the inner ear and brain, and produce small sounds known as otoacoustic emissions.

Professor Groh and colleagues found that when the eyes moved, both eardrums moved in sync with one another, one side bulging inward at the same time the other side bulged outward.

They continued to vibrate back and forth together until shortly after the eyes stopped moving. Eye movements in opposite directions produced opposite patterns of vibrations.

Larger eye movements also triggered bigger vibrations than smaller eye movements.
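
The reported timing relationship, with eardrum motion slightly leading the eyes, is the kind of effect one can pull out of paired recordings with a cross-correlation. The sketch below uses synthetic signals, not the study's data: the "eardrum" trace is just the "eye" trace advanced by 10 ms, and the peak of the cross-correlation recovers that lead.

```python
import numpy as np

fs = 1000                              # samples per second
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "eye" trace: a brief 30 Hz oscillation around t = 0.5 s.
eye = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) ** 2) / 0.005)
lead = int(0.010 * fs)                 # eardrum leads by 10 ms
ear = np.roll(eye, -lead)              # advance the trace

# The peak of the cross-correlation gives the relative timing.
corr = np.correlate(ear - ear.mean(), eye - eye.mean(), mode="full")
lag = corr.argmax() - (len(eye) - 1)
print(f"estimated eardrum lead: {-lag / fs * 1000:.1f} ms")  # ~10.0
```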

“The fact that these eardrum movements are encoding spatial information about eye movements means that they may be useful for helping our brains merge visual and auditory space,” said co-author David Murphy, a doctoral student at Duke University.

“It could also signify a marker of a healthy interaction between the auditory and visual systems.”

The findings appear in the Proceedings of the National Academy of Sciences.

_____

Kurtis G. Gruters et al. The eardrums move when the eyes move: A multisensory effect on the mechanics of hearing. PNAS, published online January 23, 2018; doi: 10.1073/pnas.1717948115

 

 

Read more…


This article and images were originally posted on [Breaking Science News] January 24, 2018 at 11:56AM. Credit to Author and Breaking Science News | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Working Human Mini Muscles Grown from Skin Cells in Scientific First

Your daily selection of the latest science news!

According to Live Science


Scientists have created tiny artificial human muscles that contract and respond to neural and electrical stimuli just like real muscles do, a new study reports. There’s just one twist: The functioning muscle fibers were made from skin cells, not muscle cells.

 
Previously, scientists have been able to make muscle cells from other types of cells; however, no one so far has managed to make functioning muscle fibers from anything other than muscle cells. (Muscle fibers are groups of muscle cells.) The successful experiment, detailed in an article published today (Jan. 9) in the journal Nature Communications, could help researchers better study genetic muscular dystrophies, and test new treatments.

 
In the study, the researchers began by taking cells from skin samples from humans. They used a known technique to turn these cells into so-called induced pluripotent stem cells — cells that can transform into any type of human cell. Then, using a new method they developed, the scientists were able to turn these pluripotent stem cells into muscle stem cells, which are called myogenic progenitors. [5 Amazing Technologies That Are Revolutionizing Biotech]

 
“We take these induced pluripotent stem cells made from a person and then we make them into muscle cells by having them express a protein called Pax7, which signals to the cells to change into muscle cells,” said senior study author Nenad Bursac, a professor of biomedical engineering at Duke University in North Carolina. “It takes about three weeks until they become reprogrammed.”

 
Using just one pluripotent stem cell taken from a donor, the researchers can create thousands of muscle stem cells, Bursac told Live Science. This is because once turned into muscle stem cells, these cells can multiply further.

 
Once the scientists had sufficient muscle stem cells to work with, they switched off the Pax7 protein (the one that signals for them to transform). Then, the muscle cells were placed in a 3D culture that contained various nutrients and growth factors that stimulate the cells to organize into muscle fibers.

 
After another three weeks, pieces of muscle tissue up to 2 centimeters (0.8 inches) long and almost 1 millimeter (0.04 inches) in diameter had formed in the solution, Bursac said.

 
Then, the tests begin. “We can subject these muscle tissues to all the classical physiological tests that you can measure in animals or in humans,” he said.

 
In this study, Bursac’s team built upon a breakthrough they had achieved three years ago, when they became the first team in the world to make functioning human muscle fibers from cells taken from muscle biopsies. But compared to those earlier samples, the fibers made from skin cells are considerably weaker, Bursac said. This is something his team wants to address in their future work, he added.

The development could significantly improve researchers’ ability to study genetic muscular diseases, such as Duchenne muscular dystrophy, which affects 1 in 3,600 male infants worldwide. People with Duchenne muscular dystrophy start having muscle weakness at about age 4. The condition progresses quickly, and by age 12, patients lose their ability to walk. Most die by age 26, according to available estimates.

 
“In genetic diseases in pediatric patients the muscles are already damaged and it’s not good for them if we take biopsies,” Bursac said. “This method allows us to generate muscle samples from their skin or blood samples.” [Meet Your Muscles: 6 Remarkable Human Muscles]

 
Since the fibers that the scientists created in the study are fully functioning, the researchers can now study how they respond to various treatments, Bursac said.

 
“By being able to form functioning muscle, we can really study various parameters and see whether certain therapies can lead to improvement in muscle strength and muscle contraction,” Bursac said. “We hope that this will be more predictive than animal studies.”

 
Bursac noted that some drugs that work in mice could be toxic for humans. Having such artificial human muscle fibers would therefore streamline the development of new safe treatments, he said.

 
Still, the muscle fibers the researchers grew in the lab were quite small. The size of the muscle fibers that can be grown is currently limited because bioengineers are not yet able to create blood vessels long enough to support samples larger than a centimeter or two, Bursac said. This hinders the entire bioengineering field, he added.

 
He hopes the technique could possibly be used in the future to re-engineer a patient’s damaged cells into healthy cells and use the resulting muscle fibers to improve the patient’s quality of life.

 
“Because of the size limit that we have, we can’t use this to treat big muscle injuries,” Bursac said. “But if there is a localized injury, particularly to specific muscles, then tissue engineering applications such as this one could be used for the local repair of the muscle.”

Read more…


This article and images were originally posted on [Live Science] January 9, 2018 at 03:41PM. Credit to Author and Live Science | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

Science has outgrown the human mind

Your daily selection of the latest science news!

According to Quartz


The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads and… attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency. –Ibn al-Haytham (965-1040 CE)

Science is in the midst of a data crisis. Last year, there were more than 1.2 million new papers published in the biomedical sciences alone, bringing the total number of peer-reviewed biomedical papers to over 26 million. However, the average scientist reads only about 250 papers a year. Meanwhile, the quality of the scientific literature has been in decline. Some recent studies found that the majority of biomedical papers were irreproducible.

The twin challenges of too much quantity and too little quality are rooted in the finite neurological capacity of the human mind. Scientists are deriving hypotheses from a smaller and smaller fraction of our collective knowledge and consequently, more and more, asking the wrong questions, or asking ones that have already been answered. Also, human creativity seems to depend increasingly on the stochasticity of previous experiences–particular life events that allow a researcher to notice something others do not. Although chance has always been a factor in scientific discovery, it is currently playing a much larger role than it should.

One promising strategy to overcome the current crisis is to integrate machines and artificial intelligence in the scientific process. Machines have greater memory and higher computational capacity than the human brain. Automation of the scientific process could greatly increase the rate of discovery. It could even begin another scientific revolution. That huge possibility hinges on an equally huge question: can scientific discovery really be automated?

I believe it can, using an approach that we have known about for centuries. The answer to this question can be found in the work of Sir Francis Bacon, the 17th-century English philosopher and a key progenitor of modern science.

The first reiterations of the scientific method can be traced back many centuries earlier to Muslim thinkers such as Ibn al-Haytham, who emphasized both empiricism and experimentation. However, it was Bacon who first formalized the scientific method and made it a subject of study. In his book Novum Organum (1620), he proposed a model for discovery that is still known as the Baconian method. He argued against syllogistic logic for scientific synthesis, which he considered to be unreliable. Instead, he proposed an approach in which relevant observations about a specific phenomenon are systematically collected, tabulated and objectively analyzed using inductive logic to generate generalizable ideas. In his view, truth could be uncovered only when the mind is free from incomplete (and hence false) axioms.

The Baconian method attempted to remove logical bias from the process of observation and conceptualization, by delineating the steps of scientific synthesis and optimizing each one separately. Bacon’s vision was to leverage a community of observers to collect vast amounts of information about nature and tabulate it into a central record accessible to inductive analysis. In Novum Organum, he wrote: “Empiricists are like ants; they accumulate and use. Rationalists spin webs like spiders. The best method is that of the bee; it is somewhere in between, taking existing material and using it.”

The Baconian method is rarely used today. It proved too laborious and extravagantly expensive; its technological applications were unclear. However, at the time the formalization of a scientific method marked a revolutionary advance. Before it, science was metaphysical, accessible only to a few learned men, mostly of noble birth. By rejecting the authority of the ancient Greeks and delineating the steps of discovery, Bacon created a blueprint that would allow anyone, regardless of background, to become a scientist.

Bacon’s insights also revealed an important hidden truth: the discovery process is inherently algorithmic. It is the outcome of a finite number of steps that are repeated until a meaningful result is uncovered. Bacon explicitly used the word ‘machine’ in describing his method. His scientific algorithm has three essential components: first, observations have to be collected and integrated into the total corpus of knowledge. Second, the new observations are used to generate new hypotheses. Third, the hypotheses are tested through carefully designed experiments.
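
Read that way, the Baconian method is little more than a loop. Here is a schematic sketch (my paraphrase of the essay's three components, with the observe, hypothesize and test steps left as caller-supplied functions):

```python
def discover(observe, hypothesize, test, max_rounds=100):
    """Bacon's cycle, schematically: collect observations, induce a
    hypothesis from the accumulated record, test it by experiment,
    and repeat until something survives testing."""
    observations = []
    for _ in range(max_rounds):
        observations.append(observe())          # 1. grow the record
        hypothesis = hypothesize(observations)  # 2. inductive synthesis
        if test(hypothesis):                    # 3. designed experiment
            return hypothesis
    return None  # nothing meaningful uncovered within the budget
```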

If science is algorithmic, then it must have the potential for automation. This futuristic dream has eluded information and computer scientists for decades, in large part because the three main steps of scientific discovery occupy different planes. Observation is sensual; hypothesis-generation is mental; and experimentation is mechanical. Automating the scientific process will require the effective incorporation of machines in each step, and in all three feeding into each other without friction. Nobody has yet figured out how to do that.

Experimentation has seen the most substantial recent progress. For example, the pharmaceutical industry commonly uses automated high-throughput platforms for drug design. Startups such as Transcriptic and Emerald Cloud Lab, both in California, are building systems to automate almost every physical task that biomedical scientists do. Scientists can submit their experiments online, where they are converted to code and fed into robotic platforms that carry out a battery of biological experiments. These solutions are most relevant to disciplines that require intensive experimentation, such as molecular biology and chemical engineering, but analogous methods can be applied in other data-intensive fields, and even extended to theoretical disciplines.

Automated hypothesis-generation is less advanced, but the work of Don Swanson in the 1980s provided an important step forward. He demonstrated the existence of hidden links between unrelated ideas in the scientific literature; using a simple deductive logical framework, he could connect papers from various fields with no citation overlap. In this way, Swanson was able to hypothesize a novel link between dietary fish oil and Raynaud’s syndrome without conducting any experiments or being an expert in either field. Other, more recent approaches, such as those of Andrey Rzhetsky at the University of Chicago and Albert-László Barabási at Northeastern University, rely on mathematical modeling and graph theory. They incorporate large datasets, in which knowledge is projected as a network, where nodes are concepts and links are relationships between them. Novel hypotheses would show up as undiscovered links between nodes.
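
Swanson's "ABC" idea is simple enough to sketch: if concept A co-occurs in the literature with B, and B with C, but A and C never appear together, then A and C form a candidate hidden link. The mini-corpus below is fabricated to echo his fish-oil example; real systems work over millions of papers.

```python
from itertools import combinations

# Each "paper" is reduced to the set of concepts it mentions.
papers = [
    {"fish oil", "blood viscosity"},
    {"blood viscosity", "raynaud's syndrome"},
    {"fish oil", "platelet aggregation"},
    {"platelet aggregation", "raynaud's syndrome"},
]

cooccur = {}  # concept -> set of concepts it appears with
for paper in papers:
    for a, b in combinations(sorted(paper), 2):
        cooccur.setdefault(a, set()).add(b)
        cooccur.setdefault(b, set()).add(a)

for a, c in combinations(sorted(cooccur), 2):
    if c in cooccur[a]:
        continue  # already directly linked in the literature
    bridges = cooccur[a] & cooccur[c]
    if bridges:
        print(f"candidate hidden link: {a} -- {c} via {sorted(bridges)}")
```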

The most challenging step in the automation process is how to collect reliable scientific observations on a large scale. There is currently no central data bank that holds humanity’s total scientific knowledge on an observational level. Natural language-processing has advanced to the point at which it can automatically extract not only relationships but also context from scientific papers. However, major scientific publishers have placed severe restrictions on text-mining. More important, the text of papers is biased towards the scientist’s interpretations (or misconceptions), and it contains synthesized complex concepts and methodologies that are difficult to extract and quantify.

Nevertheless, recent advances in computing and networked databases make the Baconian method practical for the first time in history. And even before scientific discovery can be automated, embracing Bacon’s approach could prove valuable at a time when pure reductionism is reaching the edge of its usefulness.

Human minds simply cannot reconstruct highly complex natural phenomena efficiently enough in the age of big data. A modern Baconian method that incorporates reductionist ideas through data-mining, but then analyses this information through inductive computational models, could transform our understanding of the natural world. Such an approach would enable us to generate novel hypotheses that have higher chances of turning out to be true, to test those hypotheses, and to fill gaps in our knowledge. It would also provide a much-needed reminder of what science is supposed to be: truth-seeking, anti-authoritarian, and limitlessly free.

Read more…


This article and images were originally posted on [Quartz] November 18, 2017 at 07:04AM

Credit to Author and Quartz | ESIST.T>G>S Recommended Articles Of The Day

 

 

 

 

No, there has not been a successful human head transplant

Your daily selection of the latest science news!

According to Popular Science


In a 2015 TEDx talk, Sergio Canavero made a bold and tantalizing claim: by 2017, he swore, he would conduct the first human head transplant. And if you’ve been paying attention to trending headlines today, you might think he’s followed through on that promise.

He has not.

Canavero has not completed a successful human head transplant, and it is very unlikely that he will ever do so.

We repeat: No one has completed a successful human head transplant.

Here’s what you need to know:

The claim

Sergio Canavero has popped in and out of medical news for the past several years, but made headlines in 2015 when he found a willing subject for the surgery he hoped to perfect: the human head transplant.

A human head transplant is exactly what it sounds like (except for the fact that we should really call it a body transplant, but whatever). The patient—likely someone with a degenerative muscle disease—would have their head removed and attached to a donated body. In theory, one could fix just about any physical ailment with this transplantation. If you’d been paralyzed, you could pop the part of you that makes you you onto a fully-functional body. If multiple organs were set to fail, you could get yourself a whole new set instead of trying your luck on transplant waiting lists.

“For too long nature has dictated her rules to us,” Canavero said at a press conference. “We’re born, we grow, we age and we die. For millions of years humans have evolved and 100 billion humans have died. That’s genocide on a mass scale. We have entered an age where we will take our destiny back in our hands. It will change everything. It will change you at every level.”

If such a procedure became widely available, it could set up some mind-blowing shifts in human society. At best, we could live in a Twilight Zone-esque world where anyone with enough cash could rotate through perfect young bodies for as long as doctors could keep their brains healthy (which still sounds pretty awful). At worst, well, anyone who’s seen the movie Get Out can imagine a few horrific unintended outcomes.

What’s so difficult about transplanting a head?

Beyond the kind-of-icky implications of the procedure (Frankenstein and the taboos around interfering with dead bodies spring to mind), it’s not an idea completely devoid of merit. Most scientists and physicians would argue that time is better spent perfecting the procedures we use to solve problems piece by piece, but it would be great if one surgery could have a quadriplegic walking again. So why isn’t this something loads of people are working on?

Some things are relatively easy to transplant. Take the heart, for example. Yes, heart surgery is inherently dangerous, but there are relatively few pipes for doctors to reconnect to the recipient’s plumbing system.

The spinal cord is very different.

Doctors have never successfully reconnected a fully detached spinal cord. For a completely severed spinal cord to be brought back to functionality there are millions of nerve connections that need to be linked back together, and these are wildly difficult to rejoin.

Consider the recent rash of groundbreaking new transplants like ones to replace the penis, face, hands, or uterus. Each carried its own controversies, and required years of collaboration among top surgeons in their respective specialties. In 2017, we’ve only just begun to attach hands in such a way that nerves will connect and operate well enough to make the appendages usable. Accomplishing the same feat with an entire body would be a monumental achievement.

Canavero has previously claimed success in fusing severed spinal cords in mice, but the results he published left some experts skeptical. In fact, many question whether he even expects to be successful in the endeavor.

Another issue is the brain, which is a unique and delicate organ. It starts to degrade beyond repair within minutes of losing its blood supply. A freshly-harvested heart, packed in ice, can survive an airlift to the chest it will soon call home. Even cooled down, could a brain be held in stasis long enough to survive as surgeons plucked it from its native blood supply and meticulously made the connections that would provide it with the support of a new body? It does not seem likely, especially since any damage to the brain could negate the entire purpose of the transplant. The patient would want to bring their self into a new body, not gain some health and mobility at the risk of losing their personality or intellect.

That brings us to another hurdle. Unlike the transplantation of internal organs, the receipt of external body parts like penises, faces, and hands poses a high risk of psychological rejection. The first successful penis transplant was cut short when the distressed patient told doctors to remove his new genitalia. Face transplants pose a similar problem; all organ recipients must take drugs to suppress a rejection of the organ by their immune system, but having a cadaver’s tissue in the place where you once saw something as familiar and defining as your own face or penis can be deeply troubling.

How much greater would this uneasiness stand to be in patients who received entirely new bodies? And how would it feel to know that the drugs you took to prevent rejection were actually fighting to keep the body from rejecting you? The thought of a patient grappling with the knowledge that their new-found body is desperately fighting to kill their brain is disturbing, to say the least.

Why are people saying he transplanted a head?

For starters, not everyone can be as scrupulous in their health and science reporting as good ol’ PopSci. It sounds like a really cool thing, so outlets ran with it. But we digress.

Canavero’s “successful” transplant was conducted using two corpses. Now, it’s important to perform brand new surgeries on corpses. It’s not something you want to just free-hand with a live patient without a little practice. The roads to penis, hand, and face transplants were all littered with corpses. But those surgeries were not considered successfully completed until doctors graduated from dry-runs with the dead to actual treatment of the living.

Canavero announced an 18-hour surgery on a cadaver in China, and says he’ll move on to perfecting the procedure with brain-dead organ donors soon.

“Maybe the procedure did make a good show of ‘attaching’ the nerves and blood vessels on the broad scale, but, so what? That’s just the start of what’s required for a working bodily system,” Neuroscientist Dean Burnett wrote in The Guardian. “There’s still a way to go. You can weld two halves of different cars together and call it a success if you like, but if the moment you turn the key in the ignition the whole thing explodes, most would be hard pressed to back you up on your brilliance.”

Burnett notes that this is par for the course for Canavero, who often trumpets success after experiments that most researchers would not consider especially promising.

The best case scenario is that Canavero is simply jumping the gun in a major way. Perhaps he really is making headway (sorry) and will one day have the data and surgical techniques required to attempt such a procedure on a living human. Or if not, then maybe some of the steps he’s taking along the way—improving our abilities to preserve brain function without blood flow, coming up with better ways of healing serious spinal wounds—will pay off in ways the mainstream medical community will come to thank him for.

However, it seems more likely that the surgeon is all talk. His tendency to tout so-called success to the press instead of publishing papers for his scientific peers to review would suggest so. If he’s really and truly figured out how to fuse two unrelated spinal columns together, why on earth hasn’t he shared his methods with surgeons who work on spinal injuries?

Whatever the case, one thing is definitely true: even if head transplants have some potential to save or improve lives, there are far more feasible surgeries and therapies in the works. It is highly unlikely that body transplants will ever become a go-to treatment, and they may never happen at all.

Read more…

__

This article and images were originally posted on [Popular Science] November 17, 2017 at 03:06PM

Credit to Author and Popular Science

 

 

 

 

Human Brain Organoids Implanted Into Rodents

Your daily selection of the latest science news!

According to RealClearScience


Minuscule blobs of human brain tissue have come a long way in the four years since scientists in Vienna discovered how to create them from stem cells.

The most advanced of these human brain organoids — no bigger than a lentil and, until now, existing only in test tubes — pulse with the kind of electrical activity that animates actual brains. They give birth to new neurons, much like full-blown brains. And they develop the six layers of the human cortex, the region responsible for thought, speech, judgment, and other advanced cognitive functions.

These micro quasi-brains are revolutionizing research on human brain development and diseases from Alzheimer’s to Zika, but the headlong rush to grow the most realistic, most highly developed brain organoids has thrown researchers into uncharted ethical waters. Like virtually all experts in the field, neuroscientist Hongjun Song of the University of Pennsylvania doesn’t “believe an organoid in a dish can think,” he said, “but it’s an issue we need to discuss.”

Those discussions will become more urgent after this weekend. At a neuroscience meeting, two teams of researchers will report implanting human brain organoids into the brains of lab rats and mice, raising the prospect that the organized, functional human tissue could develop further within a rodent. Separately, another lab has confirmed to STAT that it has connected human brain organoids to blood vessels, the first step toward giving them a blood supply.

That is necessary if the organoids are to grow bigger, probably the only way they can mimic fully grown brains and show how disorders such as autism, epilepsy, and schizophrenia unfold. But “vascularization” of cerebral organoids also raises such troubling ethical concerns that, previously, the lab paused its efforts to even try it.

Read more…

__

This article and images were originally posted on [RealClearScience – Homepage] November 8, 2017 at 12:35AM

Credit to Author and RealClearScience| ESIST.T>G>S Recommended Articles Of The Day

 

 

 

 

Pufferfish and humans share the same genes for teeth

Human teeth evolved from the same genes that make the bizarre beaked teeth of the pufferfish, according to new research by an international team of scientists.

The study, led by Dr Gareth Fraser from the University of Sheffield’s Department of Animal and Plant Sciences, has revealed that the pufferfish has a remarkably similar tooth-making programme to other vertebrates, including humans.

Published in the journal PNAS, the research found that all vertebrates have some form of dental regeneration potential. The pufferfish, however, uses the same stem cells for tooth regeneration as humans do, but replaces only some of its teeth, forming the elongated bands of its characteristic beak.

The study’s authors, which include researchers from the Natural History Museum London and the University of Tokyo, believe the research can now be used to address questions of tooth loss in humans.

“Our study questioned how pufferfish make a beak and now we’ve discovered the stem cells responsible and the genes that govern this process of continuous regeneration. These are also involved in general vertebrate tooth regeneration, including in humans,” Dr Fraser said.

He added: “The fact that all vertebrates regenerate their teeth in the same way with a set of conserved stem cells means that we can use these studies in more obscure fishes to provide clues to how we can address questions of tooth loss in humans.”

The unique pufferfish beak is one of the most extraordinary forms of evolutionary novelty. This bizarre structure has evolved through the modification of dental replacement.

The beak is composed of four elongated ‘tooth bands’ which are replaced again and again. However, instead of losing teeth when they are replaced, the pufferfish fuses multiple generations of teeth together, which gives rise to the beak, enabling them to crush incredibly hard prey.


Alex Thiery, a PhD student at the University of Sheffield who contributed to the study said: “We are interested in the developmental origin of the pufferfish beak as it presents a special opportunity to understand how evolutionary novelty can arise in vertebrates more generally.

“Vertebrates are extraordinarily diverse, however this doesn’t mean that they are dissimilar in the way in which they develop. Our work on the pufferfish beak demonstrates the dramatic effect that small changes in development can have.”

Common origins for hair, feathers and shark skin teeth

In an additional study published in the journal EvoDevo, Dr Gareth Fraser and his team from the University of Sheffield have also found that shark skin teeth (tooth-like scales called denticles) have the same developmental origins as reptile scales, bird feathers and human hair.

Previous studies have revealed that human hair, reptile scales and bird feathers evolved from a single ancestor — a reptile that lived 300 million years ago — but this new study from the Fraser Lab at Sheffield has found that the skin teeth found on sharks also developed from the same genes.

Sharks belong to a more basal group of vertebrates and their scales have been observed in the fossil record over the course of 450 million years of evolution, so the Sheffield researchers believe this indicates that all vertebrates, whether they live on land or in the sea, share the same developmental programme for skin, teeth and hair that has remained relatively unchanged throughout vertebrate evolution.

“Our study suggests the same genes are instrumental in the early development of all skin appendages from feathers and hair to shark skin teeth. Even though the final structures are very different this paper reveals that the developmental origins of all these structures are similar. Evolution has therefore used these common underpinnings as a foundation that can be modified over time to produce the vast diversity of skin structures seen in vertebrates,” Dr Fraser added.

__

This article and images were originally posted on [Latest Science News] May 16, 2017 at 03:04AM

by the University of Sheffield

 

 

 

Tensions Flare as Scientists Go Public With Plan to Build Synthetic Human DNA

One of the greatest ethical debates in science – manipulating the fundamental building blocks of life – is set to heat up once more.

According to scientists behind an ambitious and controversial plan to write the human genome from the ground up, synthesising DNA and incorporating it into mammalian and even human cells could be as little as four to five years away.

Nearly 200 leading researchers in genetics and bioengineering are expected to attend a meeting in New York City next week, to discuss the next stages of what is now called the Genome Project-write (GP-write) plan: a US$100 million venture to research, engineer, and test living systems of model organisms, including the human genome.

Framed as a follow-up to the pioneering Human Genome Project (HGP) – which culminated in 2003 after 13 years of research that mapped the human genetic code – this project is billed as the logical next step, where scientists will learn how to cost-effectively synthesise plant, animal, and eventually human DNA.

“HGP allowed us to read the genome, but we still don’t completely understand it,” GP-write coordinator Nancy J. Kelley told Alex Ossola at CNBC.

While those involved are eager to portray the project as an open, international collaboration designed to further our understandings of genome science, GP-write provoked considerable controversy after its first large meet-up a year ago was conducted virtually in secret, with a select group of invite-only experts holding talks behind closed doors.

“Given that human genome synthesis is a technology that can completely redefine the core of what now joins all of humanity together as a species, we argue that discussions of making such capacities real … should not take place without open and advance consideration of whether it is morally right to proceed,” medical ethicist Laurie Zoloth from Northwestern University and synthetic biologist Drew Endy of Stanford University wrote at the time for Cosmos Magazine.

Since then, the researchers behind the initiative have been more candid, announcing details of the project in a paper in Science, as well as releasing a white paper outlining GP-write’s timeline and goals.

One of GP-write’s lead scientists – geneticist and biochemist Jef Boeke from NYU Langone Medical Centre – says the approach has always been to consult the scientific community at large, to help frame and steer the research as it develops.

“I think articulation of our plan not to start right off synthesising a full human genome tomorrow was helpful. We have a four- to five-year period where there can be plenty of time for debate about the wisdom of that, whether resources should be put in that direction or in another,” he told CNBC.

“Whenever it’s human, everyone has an opinion and wants their voice to be heard. We want to hear what people have to say.”

But while that conversation is taking place, the science is developing regardless.

In March, Boeke shared details on a related project he’s involved with, where he oversees hundreds of scientists who are working together to synthesise an artificial yeast genome, which is expected to be complete by the end of 2017.

There might be a large gap between successfully synthesising yeast DNA and creating artificial human DNA from scratch. But the overall goal is to figure out how to synthesise comparatively simple genetic codes (such as microbial and plant DNA), before moving on to the ultimate prize.

“If you do that, you gain a much deeper understanding of how a complicated apparatus goes,” says Boeke. “Really, a synthetic genome is an engine for learning new information.”

Under its new organisational structure, GP-write is the parent project, which encompasses the core area of Human Genome Project-write (HGP-write), focussed on synthesising human genomes in whole or in part.

In addition to synthesising plant, animal, and human DNA, the primary goal of the project is to lower the cost of engineering genomes.

At present, it’s estimated to cost about 10 US cents to synthesise each base pair of the nucleobases that make up our DNA – and given that humans have about 3 billion of these pairs, that makes whole-genome synthesis prohibitively expensive.

The plan is to reduce this cost by more than 1,000-fold within 10 years.
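The arithmetic behind those figures is easy to check; a quick sketch using the article’s approximate numbers:

```python
base_pairs = 3_000_000_000  # approximate size of the human genome
cost_per_bp = 0.10          # ~10 US cents per base pair at present

today = base_pairs * cost_per_bp
target = today / 1000       # the >1,000-fold reduction GP-write aims for

print(f"Whole-genome synthesis today: ${today:,.0f}")   # ~$300,000,000
print(f"After a 1,000-fold cut:       ${target:,.0f}")  # ~$300,000
```

That US$300-million ballpark is why cost reduction, rather than novelty, is billed as the project’s primary engineering goal.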

If that happens, the lower expense involved in synthesising DNA could unlock all kinds of new potential medical treatments – targeting illnesses such as cancer and genetic diseases, helping the body to accept organ transplants, and learning more about immunity to viruses.

Of course, before that can happen, GP-write’s organisers need to raise an estimated US$100 million in funding – which will be another of the drivers of next week’s get together.

It’s an incredibly exciting undertaking, although there’s bound to be more controversy as GP-write marches ahead.

__

This article and images was originally posted on [ScienceAlert] May 2, 2017 at 06:13PM

by PETER DOCKRILL

 

 

 

DNA’s secret weapon against knots and tangles

DNA loops help to keep local regions of the genome together.

Credit: M. Imakaev/G. Fudenberg/N. Naumova/J. Dekker/L. Mirny

Leonid Mirny swivels in his office chair and grabs the power cord for his laptop. He practically bounces in his seat as he threads the cable through his fingers, creating a doughnut-sized loop. “It’s a dynamic process of motors constantly extruding loops!” says Mirny, a biophysicist here at the Massachusetts Institute of Technology in Cambridge.

Mirny’s excitement isn’t about keeping computer accessories orderly. Rather, he’s talking about a central organizing principle of the genome — how roughly 2 metres of DNA can be squeezed into nearly every cell of the human body without getting tangled up like last year’s Christmas lights.

He argues that DNA is constantly being slipped through ring-like motor proteins to make loops. This process, called loop extrusion, helps to keep local regions of DNA together, disentangling them from other parts of the genome and even giving shape and structure to the chromosomes.

Scientists have bandied about similar hypotheses for decades, but Mirny’s model, and a similar one championed by Erez Lieberman Aiden, a geneticist at Baylor College of Medicine in Houston, Texas, add a new level of molecular detail at a time of explosive growth for research into the 3D structure of the genome. The models neatly explain the data flowing from high-profile projects on how different parts of the genome interact physically — which is why they’ve garnered so much attention.

But these simple explanations are not without controversy. Although it has become increasingly clear that genome looping regulates gene expression, possibly contributing to cell development and diseases such as cancer, the predictions of the models go beyond what anyone has ever seen experimentally.

For one thing, the identity of the molecular machine that forms the loops remains a mystery. If the leading protein candidate acted like a motor, as Mirny proposes, it would guzzle energy faster than it has ever been seen to do. “As a physicist friend of mine tells me, ‘This is kind of the Higgs boson of your field’,” says Mirny; it explains one of the deepest mysteries of genome biology, but could take years to prove.

And although Mirny’s model is extremely similar to Lieberman Aiden’s — and the differences esoteric — sorting out which is right is more than a matter of tying up loose ends. If Mirny is correct, “it’s a complete revolution in DNA enzymology”, says Kim Nasmyth, a leading chromosome researcher at the University of Oxford, UK. What’s actually powering the loop formation, he adds, “has got to be the biggest problem in genome biology right now”.

Loop back

Geneticists have known for more than three decades that the genome forms loops, bringing regulatory elements into close proximity with genes that they control. But it was unclear how these loops formed.

Several researchers have independently put forward versions of loop extrusion over the years. The first was Arthur Riggs, a geneticist at the Beckman Research Institute of City of Hope in Duarte, California, who first proposed what he called “DNA reeling” in an overlooked 1990 report [1]. Yet it’s Nasmyth who is most commonly credited with originating the concept.

As he tells it, the idea came to him in 2000, after a day spent mountain climbing in the Italian Alps. He and his colleagues had recently discovered the ring-like shape of cohesin [2], a protein complex best known for helping to separate copies of chromosomes during cell division. As Nasmyth fiddled with his climbing gear, it dawned on him that chromosomes might be actively threaded through cohesin, or the related complex condensin, in much the same way as the ropes looped through his carabiners. “It appeared to explain everything,” he says.

Nasmyth described the idea in a few paragraphs in a massive, 73-page review article [3]. “Nobody took notice whatsoever,” he says — not even John Marko, a biophysicist at Northwestern University in Evanston, Illinois, who more than a decade later developed a mathematical model that complemented Nasmyth’s verbal argument [4].

Mirny joined this loop-modelling club around five years ago. He wanted to explain data sets compiled by biologist Job Dekker, a frequent collaborator at the University of Massachusetts Medical School in Worcester. Dekker had been looking at physical interactions between different spots on chromosomes using a technique called Hi-C, in which scientists sequence bits of DNA that are close to one another and produce a map of each chromosome, usually depicted as a fractal-like chessboard. The darkest squares along the main diagonal represent spots of closest interaction.

The Hi-C snapshots that Dekker and his collaborators had taken revealed distinct compartmentalized loops, with interactions happening in discrete blocks of DNA between 200,000 and 1 million letters long [5].

These ‘topologically associating domains’, or TADs, are a bit like the carriages on a crowded train. People can move about and bump into each other in the same carriage, but they can’t interact with passengers in adjacent carriages unless they slip between the end doors. The human genome may be 3 billion nucleotides long, but most interactions happen locally, within TADs.
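The chessboard picture is easy to mimic in code: a Hi-C map is just a symmetric matrix of contact counts, and TADs appear as dense blocks along its diagonal. A toy sketch with invented numbers, for intuition only:

```python
import numpy as np

n_bins = 12                       # genomic bins along one chromosome
tads = [(0, 4), (4, 9), (9, 12)]  # invented TAD boundaries, in bin indices

# Background: contact frequency decays with genomic distance.
i, j = np.indices((n_bins, n_bins))
contacts = 1.0 / (1 + np.abs(i - j))

# Within a TAD, bins contact each other far more often --
# these are the dark squares along the Hi-C diagonal.
for start, end in tads:
    contacts[start:end, start:end] *= 5

np.set_printoptions(precision=1, linewidth=120)
print(contacts)  # dense diagonal blocks = TADs
```

Real maps are built from hundreds of millions of sequenced contacts binned at kilobase resolution, but this block-on-the-diagonal signature is exactly what algorithms look for when calling TAD boundaries.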

Mirny and his team had been labouring for more than a year to explain TAD formation using computer simulations. Then, as luck would have it, Mirny happened to attend a conference at which Marko spoke about his then-unpublished model of loop extrusion. (Marko coined the term, which remains in use today.) It was the missing piece of Mirny’s puzzle. The researchers gave loop extrusion a try, and it worked. The physical act of forming the loops kept the local domains well organized. The model reproduced many of the finer-scale features of the Hi-C maps.

When Mirny and his colleagues posted their finished manuscript on the bioRxiv preprint server in August 2015, they were careful to describe the model in terms of a generic “loop-extruding factor”. But the paper didn’t shy away from speculating as to its identity: cohesin was the driving force behind the looping process for cells not in the middle of dividing, when chromosomes are loosely packed [6]. Condensin, they argued in a later paper, served this role during cell division, when the chromosomes are tightly wound [7].

A key clue was the protein CTCF, which was known to interact with cohesin at the base of each loop of uncondensed chromosomes. For a long time, researchers had assumed that loops form on DNA when these CTCF proteins bump into one another at random and lock together. But if any two CTCF proteins could pair, why did loops form only locally, and not between distant sites?

Mirny’s model assumes that CTCFs act as stop signs for cohesin. If cohesin stops extruding DNA only when it hits CTCFs on each side of a growing loop, it will naturally bring the proteins together.

But singling out cohesin was “a big leap of faith”, says biophysicist Geoff Fudenberg, who did his PhD in Mirny’s lab and is now at the University of California, San Francisco. “No one has seen these motors doing these things in living cells or even in vitro,” he says. “But we see all of these different features of the data that line up and can be unified under this principle.”

Experiments had shown, for example, that reducing the amount of cohesin in a cell results in the formation of fewer loops [8]. Overactive cohesin creates so many loops that chromosomes smush up into structures that resemble tiny worms [9].

The authors of these studies had trouble making sense of their results. Then came Mirny’s paper on bioRxiv. It was “the first time that a preprint has really changed the way people were thinking about stuff in this field”, says Matthias Merkenschlager, a cell biologist at the MRC London Institute of Medical Sciences. (Mirny’s team eventually published the work in May 2016, in Cell Reports [6].)

Multiple discovery?

Lieberman Aiden says that the idea of loop extrusion first dawned on him during a conference call in March 2015. He and his former mentor, geneticist Eric Lander of the Broad Institute in Cambridge, Massachusetts, had published some of the most detailed, high-resolution Hi-C maps of the human genome available at the time [10].

During his conference call, Lieberman Aiden was trying to explain a curious phenomenon in his data. Almost all the CTCF landing sites that anchored loops had the same orientation. What he realized was that CTCF, as a stop sign for extrusion, had inherent directionality. And just as motorists race through intersections with stop signs facing away from them, so a loop-extruding factor goes through CTCF sites unless the stop sign is facing the right way.
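Putting the two ingredients together – extrusion plus directional CTCF stop signs – yields a model simple enough to sketch in a few lines. The toy simulation below is illustrative only (positions and sites are invented, and real models such as Mirny’s track loading, unloading, and collisions between many cohesins):

```python
import random

n = 100  # positions along a stretch of DNA
# Invented CTCF sites: position -> direction the stop sign faces.
ctcf = {20: "right", 45: "left", 80: "left"}

def extrude(load_at):
    """Two legs walk outward from the loading site; each leg only
    stops at a CTCF site whose stop sign faces it."""
    left = right = load_at
    while left > 0 and ctcf.get(left) != "right":
        left -= 1
    while right < n - 1 and ctcf.get(right) != "left":
        right += 1
    return left, right

random.seed(1)
for _ in range(3):
    pos = random.randrange(n)
    print(f"loaded at {pos:3d} -> loop anchored at {extrude(pos)}")
```

However cohesin is loaded, it ends up anchored at convergently oriented CTCF pairs – which is just the directional bias Lieberman Aiden saw in his Hi-C data.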

His lab tested the model by systematically deleting and flipping CTCF-binding sites, and remapping the chromosomes with Hi-C. Time and again, the data fitted the model. The team sent its paper for review in July 2015 and published the findings three months later [11].

Mirny’s August 2015 bioRxiv paper didn’t have the same level of experimental validation, but it did include computer simulations to explain the directional bias of CTCF. In fact, both models make essentially the same predictions, leading some onlookers to speculate on whether Mirny seeded the idea. Lieberman Aiden insists that he came up with his model independently. “We submitted our paper before I ever saw their manuscript,” he says.

There are some tiny differences. The cartoons Mirny uses to describe his model seem to suggest that one cohesin ring does the extruding, whereas Lieberman Aiden’s contains two rings, connected like a pair of handcuffs (see ‘The taming of the tangles’). Suzana Hadjur, a cell biologist at University College London, calls this mechanistic nuance “absolutely fundamental” to determining cohesin’s role in the extrusion process.

(Graphic: ‘The taming of the tangles’. Credit: Nik Spencer/Nature)

Neither Lieberman Aiden nor Mirny say they have a strong opinion on whether the system uses one ring or two, but they do differ on cohesin’s central contribution to loop formation. Mirny maintains that the protein is the power source for looping, whereas Lieberman Aiden summarily dismisses this idea. Cohesin “is a big doughnut”, he says. It doesn’t do that much. “It can open and close, but we are very, very confident that cohesin itself is not a motor.”

Instead, he suspects that some other factor is pushing cohesin around, and many in the field agree. Claire Wyman, a molecular biophysicist at Erasmus University Medical Centre in Rotterdam, the Netherlands, points out that cohesin is only known to consume small amounts of energy for clasping and releasing DNA, so it’s a stretch to think of it motoring along the chromosome at the speeds required for Mirny’s model to work. “I’m willing to concede that it’s possible,” she says. “But the Magic 8-Ball would say that, ‘All signs point to no’.”

One group of proteins that might be doing the pushing is the RNA polymerases, the enzymes that create RNA from a DNA template. In a study online in Nature this week [12], Jan-Michael Peters, a chromosome biologist at the Research Institute of Molecular Pathology in Vienna, and his colleagues show that RNA polymerases can move cohesin over long distances on the genome as they transcribe genes into RNA. “RNA polymerases are one type of motor that could contribute to loop extrusion,” Peters says. But, he adds, the data indicate that it cannot be the only force at play.

Frank Uhlmann, a biochemist at the Francis Crick Institute in London, offers an alternative that doesn’t require a motor protein at all. In his view, a cohesin complex might slide along DNA randomly until it hits a CTCF site and creates a loop. This model requires only nearby strands of DNA to interact randomly — which is much more probable, Uhlmann says. “We do not need to make any assumptions about activities that we don’t have experimental evidence for.”

Researchers are trying to gather experimental evidence for one model or another. At the Lawrence Livermore National Laboratory in California, for example, biophysicist Aleksandr Noy is attempting to watch loop extrusion in action in a test tube. He throws in just three ingredients: DNA, some ATP to provide energy, and the bacterial equivalent of cohesin and condensin, a protein complex known as SMC.

“We see evidence of DNA being compacted into these kinds of flowers with loops,” says Noy, who is collaborating with Mirny on the project. That suggests that SMC — and by extension cohesin — might have a motor function. But then again, it might not. “The truth is that we just don’t know at this point,” Noy says.

Bacterial battery

The experiment that perhaps comes the closest to showing cohesin acting as a motor was published in February [13]. David Rudner, a bacterial cell biologist at Harvard Medical School in Boston, Massachusetts, and his colleagues made time-lapse Hi-C maps of the bacterium Bacillus subtilis that reveal SMC zipping along the chromosome and creating a loop at a rate of more than 50,000 DNA letters per minute. This tempo is on par with what researchers estimate would be necessary for Mirny’s model to work in human cells as well.
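That rate also makes the timescales easy to sanity-check: at roughly 50,000 letters per minute, loops the size of typical TADs would form within minutes:

```python
rate_bp_per_min = 50_000  # SMC extrusion rate from the B. subtilis maps

for loop_bp in (200_000, 1_000_000):  # typical TAD sizes seen in Hi-C
    minutes = loop_bp / rate_bp_per_min
    print(f"{loop_bp:>9,}-letter loop: ~{minutes:.0f} minutes")
```

Four to twenty minutes per loop is comfortably within the lifetime of a cell, which is why the measurement bolsters the plausibility of motor-driven extrusion in human cells.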

Rudner hasn’t yet proved that SMC uses ATP to make that happen. But, he says, he’s close — and he would be “shocked” if cohesin worked differently in human cells.

For now, the debate rages about what cohesin is, or is not, doing inside the cell — and many researchers, including Doug Koshland, a cell biologist at the University of California, Berkeley, insist that a healthy dose of scepticism is still warranted when it comes to Mirny’s idea. “I am worried that the simplicity and elegance of the loop-extrusion model is already filling textbooks, coronated long before its time,” he says.

And although it may seem an academic dispute among specialists, Mirny notes that if his model is correct, it will have real-world implications. In cancer, for instance, cohesin is frequently mutated and CTCF sites altered. Defective versions of cohesin have also been implicated in several rare human developmental disorders. If the loop-extruding process is to blame, says Mirny, then perhaps a better understanding of the motor could help fix the problem.

But his main interest remains more fundamental. He just wants to understand why DNA is configured in the way it is. And although his model assumes a lot of things about cohesin, Mirny says, “The problem is that I don’t know any other way to explain the formation of these loops.”

 

__

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

This article and images were originally posted on Nature

by Elie Dolgin

 

 

 

 

Human Umbilical Blood Has Regenerated the Brains of Elderly Mice

We swear this isn’t science fiction.

Researchers have regenerated the memories and learning abilities of elderly mice by injecting their brains with proteins taken from human umbilical cord blood.

The blood of human teenagers had previously been shown to rejuvenate ageing mice, but this new study shows that blood from the umbilical cords of babies could have even more powerful effects.

Based on these findings, the researchers suggest properties in umbilical blood could one day be used to slow down neurological degeneration in elderly human brains, too.

But these results are yet to be replicated in humans, so we can’t get too carried away.

“The really exciting thing about this study, and previous studies that have come before it, is that we’ve sort of tapped into previously unappreciated potential of our blood – our plasma – and what it can do for reversing the harmful effects of aging on the brain,” lead researcher Joe Castellano from Stanford University School of Medicine told NPR.

In the latest study, the researchers collected blood from humans at three different ages: babies’ umbilical cords; young people aged between 19 and 24; and older people aged between 61 and 82.

The team then injected plasma taken from these blood samples into mice that were the equivalent of around 50 years old in human terms.

Impressively, the mice that received the plasma from umbilical cord blood started to perform better on behavioural tests than their peers, and their memories also improved – they were better at remembering the way out of a maze.

They also started building nests again, a skill old mice tend to lose.

On a cellular level, the researchers saw enhanced activity in the hippocampus – the part of the brain responsible for learning and memory, and one of the first regions to deteriorate in old age.

Similar but less impressive results were seen in the group of mice given the blood plasma of young adults (the 19-24 group), but there were no improvements seen in those treated with the blood of the older adults.

“Our findings reveal that human cord plasma contains plasticity-enhancing proteins of high translational value for targeting ageing- or disease-associated hippocampal dysfunction,” the researchers write in Nature.

The study comes on the back of a series of recent publications suggesting that there’s something rejuvenating in human blood that gradually declines as we age – and hints that we could one day use it to help stave off the effects of old age on our brains.

One new candidate for that anti-ageing secret ingredient is a protein called TIMP2, which the researchers found in unusually high levels in umbilical cord blood compared to blood from older people.

Previous studies using the blood of young mice had also found evidence that a protein called GDF11 could have similar restorative effects.

The problem now is that the researchers still aren’t sure how either of these proteins work to stimulate anti-ageing effects.

“It’s a bit of a black box experiment, because they don’t know what’s happening,” Philip Landfield, a neuroscientist at the University of Kentucky in Lexington, who wasn’t involved in the research, told Sara Reardon from Nature.

And other researchers are more skeptical – Rob Howard from University College London, who also wasn’t involved in this paper, told The Guardian that the lesson from Alzheimer’s research so far is that “almost everything works in the animals, and so far nothing works in humans”.

So we can’t get too excited just yet. But we might not have to wait too long for answers, with the research in humans already underway.

Already, a clinical trial is being carried out at Stanford University to test the effects of the blood of under-30s on people with Alzheimer’s. That trial is being led by Tony Wyss-Coray, one of the researchers who worked on this new paper.

And start-ups are also charging people ridiculous amounts to infuse them with the blood of young people, even though there’s no evidence it actually works in humans as yet.

With all this interest, it’s possible that in the next few years we’ll get the first insight into whether young blood could help us stave off old age like it does in mice.

How we then turn that knowledge into a non-creepy treatment will be a whole other story.

The research has been published in Nature.

__

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

This article and images were originally posted on ScienceAlert

by FIONA MACDONALD

 

 

 

 

First evidence for higher state of consciousness found

Scientific evidence of a ‘higher’ state of consciousness has been found in a study led by the University of Sussex.

Neuroscientists observed a sustained increase in neural signal diversity – a measure of the complexity of brain activity – of people under the influence of psychedelic drugs, compared with when they were in a normal waking state.

The diversity of brain signals provides a mathematical index of the level of consciousness. For example, people who are awake have been shown to have more diverse neural activity using this scale than those who are asleep.
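Measures in this family are typically built on Lempel-Ziv complexity: binarize the recorded signal, then count how many distinct patterns appear as you scan it. A minimal sketch (this uses a simple LZ78-style parse, cruder than the exact measure in the study):

```python
import random

def lz_complexity(bits):
    # Count phrases never seen before while scanning left to right.
    seen, word, count = set(), "", 0
    for b in bits:
        word += b
        if word not in seen:
            seen.add(word)
            count += 1
            word = ""
    return count + (1 if word else 0)

random.seed(0)
regular = "01" * 500                                         # predictable signal
diverse = "".join(random.choice("01") for _ in range(1000))  # varied signal

print("regular:", lz_complexity(regular))  # low phrase count
print("diverse:", lz_complexity(diverse))  # substantially higher
```

A deeply anaesthetised brain produces signals closer to the ‘regular’ end of this scale; the psychedelic recordings scored above even normal waking values.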

This, however, is the first study to show brain-signal diversity that is higher than baseline – that is, higher than in someone who is simply ‘awake and aware’. Previous studies have tended to focus on lowered states of consciousness, such as sleep, anaesthesia, or the so-called ‘vegetative’ state.

The team say that more research is needed, using more sophisticated and varied models, to confirm the results, but they are cautiously excited.

Professor Anil Seth, Co-Director of the Sackler Centre for Consciousness Science at the University of Sussex, said: “This finding shows that the brain-on-psychedelics behaves very differently from normal.

“During the psychedelic state, the electrical activity of the brain is less predictable and less ‘integrated’ than during normal conscious wakefulness – as measured by ‘global signal diversity’.

“Since this measure has already shown its value as a measure of ‘conscious level’, we can say that the psychedelic state appears as a higher ‘level’ of consciousness than normal – but only with respect to this specific mathematical measure.”

For the study, Michael Schartner, Adam Barrett and Professor Seth of the Sackler Centre reanalysed data that had previously been collected by Imperial College London and the University of Cardiff in which healthy volunteers were given one of three drugs known to induce a psychedelic state: psilocybin, ketamine and LSD.

Using brain imaging technology, they measured the tiny magnetic fields produced in the brain and found that, across all three drugs, this measure of conscious level – the neural signal diversity – was reliably higher.

This does not mean that the psychedelic state is a ‘better’ or more desirable state of consciousness, the researchers stress; instead, it shows that the psychedelic brain state is distinctive and can be related to other global changes in conscious level (e.g. sleep, anaesthesia) by application of a simple mathematical measure of signal diversity.

Dr Muthukumaraswamy, who was involved in all three initial studies, commented: “That similar changes in signal diversity were found for all three drugs, despite their quite different pharmacology, is both very striking and also reassuring that the results are robust and repeatable.”

The findings could help inform discussions gathering momentum about the carefully-controlled medical use of such drugs, for example in treating severe depression.

Dr Robin Carhart-Harris of Imperial College London said: “Rigorous research into psychedelics is gaining increasing attention, not least because of the therapeutic potential that these drugs may have when used sensibly and under medical supervision.

“The present study’s findings help us understand what happens in people’s brains when they experience an expansion of their consciousness under psychedelics. People often say they experience insight under these drugs – and when this occurs in a therapeutic context, it can predict positive outcomes. The present findings may help us understand how this can happen.”

As well as helping to inform possible medical applications, the study adds to a growing scientific understanding of how conscious level (how conscious one is) and conscious content (what one is conscious of) are related to each other.

Professor Seth said: “We found correlations between the intensity of the psychedelic experience, as reported by volunteers, and changes in signal diversity. This suggests that our measure has close links not only to global brain changes induced by the drugs, but to those aspects of brain dynamics that underlie specific aspects of conscious experience.”

The research team are now working hard to identify how specific changes in information flow in the brain underlie specific aspects of psychedelic experience, like hallucinations.

The study is published in Scientific Reports.

In a striking coincidence, the release date of this paper (19th April, 2017) comes precisely 74 years to the day after Albert Hofmann – who first synthesized LSD in 1938 – conducted his first ‘self-experiment’ to discover its psychological effects. This date, 19th April 1943, is widely known as ‘bicycle day’ in honour of Hofmann’s bicycle ride home following this first LSD trip.

More information: Scientific Reports, DOI: 10.1038/srep46421

__

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

This article and images were originally posted on medicalxpress.com

 

 

 

We’ll probably have to genetically augment our bodies to survive Mars
 

 

When it comes to space travel, there’s no shortage of enthusiasm to get humans to Mars, with SpaceX’s Elon Musk saying his company could take passengers to the Red Planet by 2025, and NASA being asked by Congress to achieve the mission by 2033.

But while making the trip could be technologically feasible in the next decade or two, are humans really physically and psychologically ready to abandon Earth and begin colonising the Red Planet?

Nope, not a chance, according to a recent paper by cognitive scientist Konrad Szocik from the University of Information Technology and Management in Poland.

Szocik argues that no amount of year-long Martian simulations on Earth or long-duration stays aboard the International Space Station (ISS) could prepare human astronauts for the challenges that Mars colonisation would provide.

“We cannot simulate the same physical and environmental conditions to reconstruct the Martian environment, I mean such traits like Martian microgravitation or radiation exposure,” Szocik told Elizabeth Howell at Seeker.

“Consequently, we cannot predict [the] physical and biological effects of humans living on Mars.”

In a recent article, Szocik and his co-authors discussed some of the political, cultural, and personal challenges Mars colonists would face, and in a nutshell, the team doesn’t think human beings could cut it on the Red Planet – not without making changes to our bodies to help us more easily adapt to the Martian environment.

“My idea is that [the] human body and mind is adapted to live in the terrestrial environment,” Szocik told Rae Paoletta at Gizmodo.

“Consequently, some particular physiological and psychological challenges during [the] journey and then during living on Mars probably will be too difficult for human beings to survive.”

NASA astronaut Scott Kelly and Russian cosmonaut Mikhail Kornienko famously spent a year on the ISS, but the ordeal was not without significant physiological effects and pains resulting from so much time living in space.

But those hardships would be mild compared with what travellers to Mars would experience: much longer journeys, without knowing when, or if, they could ever return to Earth.

“These first astronauts will be aware that after the almost one-year journey, they will have to live on Mars for at least several years or probably their entire lives due to the fact that their return will most likely be technologically impossible,” the authors explain.

“Perhaps these first colonisers will know that their mission is a ‘one way ticket’.”

The researchers acknowledge that inducing travellers into a coma-like state might make the voyage itself more bearable, but once they’ve arrived, colonists will be faced with an environment where artificial life support is a constant requirement – that is, until some far-off, future terraforming technology can make Mars’ arid and freezing environment hospitable.

Until that happens, the researchers think that humanity’s best prospects for living on Mars would involve some kind of body or genetic altering that might give us a fighting chance of survival on a planet we’ve never had to evolve on.

“We claim that human beings are not evolutionally adapted to colonise cosmic environments,” the authors explain.

“We suggest that the best solution could be the artificial acceleration of the biological evolution of the astronauts before they start their deep space mission.”

While the team doesn’t provide details of what that would entail in their paper, Szocik told Gizmodo that “permanent solutions like genetical and/or surgical modifications” could make colonists capable of surviving on Mars in ways that unaltered humans can’t.

According to NASA’s former chief scientist for human research, Mark J. Shelhamer, while these ideas may be interesting and help further the discussion about what it will take for humans to adapt to Mars’ environment, once talk turns to genetics, you run into a minefield of other potential issues.

“Already, people have suggested selecting astronauts for genetic predisposition for such things as radiation resistance,” says Shelhamer.

“Of course, this idea is fraught with problems. For one, it’s illegal to make employment decisions based on genetic information. For another, there are usually unintended consequences when making manipulations like this, and who knows what might get worse if we pick and choose what we think needs to be made better.”

Those sound like pretty fair points – especially considering Szocik goes as far as to suggest that “human cloning or other similar methods” might ultimately be necessary to sustain colony populations over generations without running the risk of in-breeding between too few colonists.

Clearly, there’s a lot to work out here, and while some of the researchers’ ideas are definitely a bit out there, we’re going to need to think outside the box if we want to inhabit a planet that at its closest is about 56 million km (33.9 million miles) away.

For his part, Shelhamer is confident that the right kind of training will equip human travellers for the ordeals of their Mars journey – and if current estimates on when we can expect to see this happen are correct, we won’t have too long to wait to see if he’s right.

“I think we can give astronauts the tools – physical, mental, operational – so that they are, individually and as a group, resilient in the face of the unknown,” he told Gizmodo.

“What kind of person thrives in an extreme environment? What types of mission structures are in place to help that person? This needs to be examined systematically.”

The research is published in Space Policy.

__

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

This article and images were originally posted on ScienceAlert

by PETER DOCKRILL

 

 

 

Scientists might finally understand one of the most basic but mysterious aspects of our heartbeats

When it comes to something as essential and universal as a heartbeat, you’d think scientists would have the science behind it pretty much figured out.

But for centuries, the physics behind how our hearts work has eluded researchers. The pumping makes sense, but no one had been able to completely explain how the heart fills up with blood. Now, a study has finally revealed the answer in one of the laws of physics.

For a little refresher, our hearts are roughly the size of a large human fist, and they’re made up of four chambers. The two upper ones are the atria, and the two lower ones are the ventricles.

(Image: diagram of the human heart. Credit: Yaddah/Wikimedia)

Deoxygenated blood leaves the right ventricle of the heart and travels to the lungs, and then returns as oxygenated blood via the left atrium.

This oxygenated blood is then pumped out of the left ventricle to supply oxygen to the rest of the body, before re-entering the heart’s right atrium.

So far, so good. But although scientists are very familiar with this process, what hasn’t been clear is how and why this happens. What physical process is causing those ventricles to fill up with blood?

Now, researchers led by the KTH Royal Institute of Technology in Sweden have used something called cardiovascular magnetic resonance imaging to track the size of the heart’s chambers as it beats, and have shown that the process comes down in part to hydraulic forces – the same phenomenon that powers car brakes and forklifts.

It’s not just fascinating new insight into one of the most crucial processes in our body – the discovery could also pave the way for new treatment options for heart failure and disease.

“Although this might seem simple and obvious, the impact of the hydraulic force on the heart’s filling pattern has been overlooked,” said lead researcher Martin Ugander.

“Our observation is exciting since it can lead to new types of therapies for heart failure involving trying to reduce the size of the atrium.”

So how did we go this long without figuring that out? For years, we’ve understood only part of the puzzle.

Biologists know that a protein called titin in heart muscle cells acts as a spring that releases elastic energy, encouraging the ventricles to fill with blood. But that spring action on its own couldn’t explain the rapid amount of filling scientists were seeing.

(Image: CG rendering of the heart. Credit: DrJanaOfficial/Wikimedia)

In the latest study, the team turned to physics, instead. They used cardiovascular magnetic resonance imaging to measure the size of both chambers during diastole – the phase during which the ventricles are filling up with blood – in healthy hearts.

This allowed them to then create physical models – almost like a piston – of the heart chambers, and explain what was going on, according to the laws of physics.

They found that 10 to 60 percent of the peak driving force filling up the left ventricle during diastole wasn’t to do with the relaxing of the heart muscle. It was down to hydraulic force – the pressure a liquid exerts on an area.

Hydraulic force is the same force that powers car brakes, and it works thanks to something called Pascal’s principle.

In the heart, it’s driven by the size of the heart’s chambers in relation to each other. The top atrium is smaller than the ventricle throughout diastole, and the team showed that because of this, when the valve between the two chambers opens, blood will rush into the ventricle to equalise pressure.
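In Pascal’s-principle terms, the same blood pressure pushing on two chambers of different cross-sectional area produces a net force toward the larger chamber, roughly F = P × (A_ventricle − A_atrium). A toy calculation with illustrative numbers (not measurements from the study):

```python
# Illustrative values only -- not data from the paper.
pressure_pa = 1_330         # ~10 mmHg filling pressure, in pascals
area_ventricle_m2 = 0.0030  # assumed ventricular cross-section (30 cm^2)
area_atrium_m2 = 0.0015     # assumed atrial cross-section (15 cm^2)

# Same pressure, different areas -> net hydraulic force toward the ventricle.
net_force_n = pressure_pa * (area_ventricle_m2 - area_atrium_m2)
print(f"Net hydraulic filling force: ~{net_force_n:.1f} N")  # ~2 N
```

Shrink the area difference – say, because the atrium has become enlarged – and the filling force shrinks with it, which is exactly the link to heart failure the researchers describe below.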

You can see this in action, in a handy balloon demonstration, in a video from the KTH Royal Institute of Technology.

“The geometry of the heart thus determines the magnitude of the force,” a statement from the university explains.

“Hydraulic forces that help the heart’s chambers to fill with blood arise as a natural consequence of the fact that the atrium is smaller than the ventricle.”

When it comes to heart failure, many patients have problems with this diastole – or filling – phase. The team explains that it’s often seen in combination with an enlarged atrium.

Thanks to this new research, it’s now clear that if the atrium gets larger in proportion to the ventricle, then it reduces the hydraulic force and therefore the heart’s ability to be filled with blood.

“Much of the focus has been on the ventricular function in heart failure patients,” said one of the team, Elira Maksuti.

“We think it can be an important part of diagnosis and treatment to measure both the atrium and ventricle to find out their relative dimensions.”

This is just one paper, and more observations need to be done before we can change the way we view heart function and dysfunction entirely. But it’s awesome to know that there are still mysteries to solve about some of the most fundamental processes of our bodies.

The research has been published in Scientific Reports.

__

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

This article and images were originally posted on ScienceAlert

by FIONA MACDONALD

 

 

 

Could a Spacecraft Fly to the Sun?

NASA is sending the Solar Probe Plus spacecraft to within 4 million miles (6 million kilometers) of the sun in 2018. And the agency is taking every precaution to keep the craft from melting.

Credit: Johns Hopkins University Applied Physics Laboratory


Humans have sent spacecraft to the moon, Mars and even distant interstellar space, but could we send a spaceship to the scorching sun?

The answer is yes, and it’s happening soon.

In 2018, NASA plans to launch the Solar Probe Plus mission to the sun. Earth is about 93 million miles (149 million kilometers) from the sun, and Solar Probe Plus is slated to get within 4 million miles (6 million km) of the blazing star. [What Will Happen to Earth When the Sun Dies?]

“This is going to be our first mission to fly to the sun,” said Eric Christian, a NASA research scientist at Goddard Space Flight Center in Greenbelt, Maryland. “We can’t get to the very surface of the sun,” but the mission will get close enough to answer three important questions, he said.

First, the mission aims to reveal why the surface of the sun, called the photosphere, is not as hot as the sun’s atmosphere, called the corona. The surface of the sun is only about 10,000 degrees Fahrenheit (5,500 degrees Celsius). But the atmosphere above it is a sizzling 3.5 million F (2 million C), according to NASA.

“You’d think the farther away you get from a heat source, you’d get colder,” Christian told Live Science. “Why the atmosphere is hotter than the surface is a big puzzle.”

Second, scientists want to know how the solar wind gets its speed. “The sun blows a stream of charged particles in all directions at a million miles an hour,” he said. “But we don’t understand how that gets accelerated.”

People have known about the solar wind for years, as early observers noticed that the tails of comets always pointed away from the sun, even if the comet was traveling in another direction. This suggested that something — that is, the solar wind — was coming off the sun faster than the comet was moving, Christian said.

Third, the mission may ascertain why the sun occasionally emits high-energy particles — called solar energetic particles — that are a danger to unprotected astronauts and spacecraft.

Researchers have tried to figure out these mysteries from Earth, but “the trouble is we’re 93 million miles away,” Christian said. “[The distance makes] things get smeared out in a way that makes it hard to tell what’s happening at the sun.”

But flying to within 4 million miles of the sun has its challenges. The main challenge, unsurprisingly, is the heat. To deal with the extreme temperatures, NASA scientists have designed a 4.5-inch-thick (11.4 centimeters) carbon-composite shield, which is designed to withstand temperatures outside the spacecraft of 2,500 F (1,370 C), according to the Johns Hopkins University Applied Physics Laboratory, a NASA collaborator working on the Solar Probe Plus.
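As a sanity check on that figure, simple radiative-balance physics lands surprisingly close. The sketch below is a textbook idealisation – a flat blackbody shield absorbing sunlight on one face and radiating from both, with no coatings or conduction – not NASA’s engineering analysis:

```python
# Back-of-the-envelope equilibrium temperature of an idealised flat
# blackbody shield at ~4 million miles from the sun. Textbook physics
# with assumed round numbers, not NASA's engineering model.
T_SUN = 5778      # K, effective temperature of the sun's photosphere
R_SUN = 6.96e8    # m, solar radius
d = 6.0e9         # m, ~4 million miles (6 million km)

# Energy balance: sigma * T_sun^4 * (R_sun/d)^2 = 2 * sigma * T^4
# (absorb on one face, radiate from two), which rearranges to:
T_eq = T_SUN * (R_SUN / d) ** 0.5 / 2 ** 0.25
print(f"~{T_eq:.0f} K, i.e. ~{T_eq - 273.15:.0f} C")  # ~1,655 K / ~1,380 C
```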

In addition, the probe will have special heat tubes called thermal radiators that will radiate heat that permeates the heat shield to open space, “so it doesn’t go to the instruments, which are sensitive to heat,” Christian said.

If these protections work as expected, the instruments in the probe will stay at room temperature, Christian said. [Is There Gravity in Space?]

The Solar Probe Plus will also be protected from radiation, which can damage the probe’s electrical circuits, especially its memory, he said.

The spacecraft will be unmanned, but if given enough time and money, NASA scientists could probably develop a spacecraft that could safely carry an astronaut to within 4 million miles of the sun, Christian said. However, the cost of a human life is great, and that’s a risk uncrewed missions don’t carry, he noted.

If all goes as planned, the Solar Probe Plus will be the closest that a human-made object has ever come to the sun. Until now, the closest spacecraft were Helios 1 (launched in December 1974), which flew to within 29 million miles (47 million km) of the sun, and Helios 2 (launched in April 1976), which flew about 1.8 million miles (3 million km) closer to the sun than Helios 1.

More recently, Messenger (launched in August 2004) explored Mercury, which is about 36 million miles (58 million km) from the sun.

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

This article and its images were originally posted on livescience.com

By Laura Geggel

 

 

 

Creative people have better-connected brains, research finds

An MRI technique called diffusion tensor imaging traces the bundles of nerve fibers that carry electrical signals between different areas of the brain. Credit: Thomas Schultz, Wikimedia Commons


Seemingly countless self-help books and seminars tell you to tap into the right side of your brain to stimulate creativity. But forget the “right-brain” myth—a new study suggests it’s how well the two brain hemispheres communicate that sets highly creative people apart.

For the study, statisticians David Dunson of Duke University and Daniele Durante of the University of Padova analyzed the network of white matter connections among 68 separate brain regions in healthy college-age volunteers.

The brain’s white matter lies underneath the outer grey matter. It is composed of bundles of wires, or axons, which connect billions of neurons and carry electrical signals between them.

A team led by neuroscientist Rex Jung of the University of New Mexico collected the data using an MRI technique called diffusion tensor imaging, which allows researchers to peer through the skull of a living person and trace the paths of all the axons by following the movement of water along them. Computers then comb through each of the 1-gigabyte scans and convert them to three-dimensional maps—wiring diagrams of the brain.

Jung’s team used a combination of tests to assess creativity. Some were measures of a type of problem-solving called “divergent thinking,” or the ability to come up with many answers to a question. They asked people to draw as many geometric designs as they could in five minutes. They also asked people to list as many new uses as they could for everyday objects, such as a brick or a paper clip. The participants also filled out a questionnaire about their achievements in ten areas, including the visual arts, music, creative writing, dance, cooking and science.

The responses were used to calculate a composite creativity score for each person.

Dunson and Durante trained computers to sift through the data and identify differences in brain structure.

They found no statistical differences in connectivity within hemispheres, or between men and women. But when they compared people who scored in the top 15 percent on the creativity tests with those in the bottom 15 percent, high-scoring people had significantly more connections between the right and left hemispheres.

The differences were mainly in the brain’s frontal lobe.
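For readers who want a concrete picture of that kind of group contrast – and to be clear, this is not the authors’ Bayesian network method, just a simplified stand-in with invented data – one could compare interhemispheric connection counts between the top and bottom 15 percent of creativity scores like this:

```python
# Simplified stand-in for a top-vs-bottom-15% group comparison.
# All data here are simulated; the published study used a far more
# sophisticated Bayesian analysis of whole-brain networks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical volunteers: one creativity score and one count of
# interhemispheric white-matter connections each.
creativity = rng.normal(100, 15, size=200)
connections = 50 + 0.3 * creativity + rng.normal(0, 5, size=200)

lo, hi = np.percentile(creativity, [15, 85])
bottom = connections[creativity <= lo]   # bottom 15% of creativity scores
top = connections[creativity >= hi]      # top 15%

t, p = stats.ttest_ind(top, bottom, equal_var=False)  # Welch's t-test
print(f"top mean = {top.mean():.1f}, bottom mean = {bottom.mean():.1f}, p = {p:.3g}")
```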

Highly creative people have significantly more white matter connections (shown in green) between the right and left hemispheres of the brain, according to a new analysis. Credit: Daniele Durante, University of Padova


Dunson said their approach could also be used to predict the probability that a person will be highly creative simply based on his or her brain network structure. “Maybe by scanning a person’s brain we could tell what they’re likely to be good at,” Dunson said.

The study is part of a decade-old field, connectomics, which uses network science to understand the brain. Instead of focusing on specific brain regions in isolation, connectomics researchers use advanced brain imaging techniques to identify and map the rich, dense web of links between them.

Dunson and colleagues are now developing statistical methods to find out whether brain connectivity varies with I.Q., whose relationship to creativity is a subject of ongoing debate.

In collaboration with neurology professor Paul Thompson at the University of Southern California, they’re also using their methods for early detection of Alzheimer’s disease, to help distinguish it from normal aging.

By studying the patterns of interconnections in healthy and diseased brains, they and other researchers also hope to better understand dementia, epilepsy, schizophrenia and other neurological conditions such as coma.

“Data sharing in neuroscience is increasingly more common as compared to only five years ago,” said Joshua Vogelstein of Johns Hopkins University, who founded the Open Connectome Project and processed the raw data for the study.

Just making sense of the enormous datasets produced by brain imaging studies is a challenge, Dunson said.

Most statistical methods for analyzing brain network data focus on estimating properties of single brains, such as which regions serve as highly connected hubs. But each person’s brain is wired differently, and techniques for identifying similarities and differences in connectivity across individuals and between groups have lagged behind.

The study appears online and will be published in a forthcoming issue of the journal Bayesian Analysis.

 

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

This article was originally posted on medicalxpress.com

Provided by: Duke University

 

 

 

B vitamins reduce schizophrenia symptoms, study finds

Functional magnetic resonance imaging (fMRI) and other brain imaging technologies allow for the study of differences in brain activity in people diagnosed with schizophrenia. The image shows two levels of the brain, with areas that were more active in healthy controls than in schizophrenia patients shown in orange, during an fMRI study of working memory. Credit: Kim J, Matthews NL, Park S./PLoS One.


A review of worldwide studies has found that add-on treatment with high-dose B vitamins – including B6, B8 and B12 – can significantly reduce symptoms of schizophrenia more than standard treatments alone.

The research – on the effect of vitamin and mineral supplements on symptoms of schizophrenia – is funded by the Medical Research Council and the University of Manchester, and is published in Psychological Medicine, one of the world’s leading psychology journals.

Lead author Joseph Firth, based at the University’s Division of Psychology and Mental Health, said: “Looking at all of the data from clinical trials of vitamin and mineral supplements for schizophrenia to date, we can see that B vitamins effectively improve outcomes for some patients.

“This could be an important advance, given that new treatments for this condition are so desperately needed.”

Schizophrenia affects around 1% of the population and is among the most disabling and costly long term conditions worldwide.

Currently, treatment is based around the administration of antipsychotic drugs.

Although patients typically experience remission of symptoms such as hallucinations and delusions within the first few months of treatment, long-term outcomes are poor; 80% of patients relapse within five years.

The researchers reviewed all clinical trials reporting effects of vitamin or mineral supplements on psychiatric symptoms in people with schizophrenia.

In what is the first meta-analysis carried out on this topic, they identified 18 clinical trials with a combined total of 832 patients receiving antipsychotic treatment for schizophrenia.
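For the curious, the core of any meta-analysis is pooling each trial’s effect estimate, weighted by its precision. Below is a generic, minimal inverse-variance sketch with invented numbers – it is not the analysis from the Psychological Medicine paper:

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# The (effect size, standard error) pairs are invented for illustration,
# not data from the 18 trials in the published meta-analysis.
import math

trials = [(0.40, 0.20), (0.15, 0.25), (0.55, 0.30), (0.10, 0.15)]

weights = [1 / se ** 2 for _, se in trials]   # precision = 1 / variance
pooled = sum(w * es for (es, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```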

B-vitamin interventions which used higher dosages or combined several vitamins were consistently effective for reducing psychiatric symptoms, whereas those which used lower doses were ineffective.

The available evidence also suggests that B-vitamin supplements may be most beneficial when implemented early on, as B vitamins were most likely to reduce symptoms when used in studies of patients with shorter illness durations.

Firth added: “High-dose B-vitamins may be useful for reducing residual symptoms in people with schizophrenia, although there were significant differences among the findings of the studies we looked at.”

“There is also some indication that these overall effects may be driven by larger benefits among subgroups of patients who have relevant genetic or dietary nutritional deficiencies.”

Co-author Jerome Sarris, Professor of Integrative Mental Health at Western Sydney University, added: “This builds on existing evidence of other food-derived supplements, such as certain amino acids, being beneficial for people with schizophrenia.

“These new findings also fit with our latest research examining how multi-nutrient treatments can reduce depression and other disorders.”

The research team say more studies are now needed to discover how nutrients act on the brain to improve mental health, and to measure the effects of nutrient-based treatments on other outcomes such as brain functioning and metabolic health.

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

This article was originally posted on medicalxpress.com

Provided by: University of Manchester

 

 

 

Scientists show brain’s own opioids involved in musical pleasure

Credit: Human Brain Project

The same brain-chemical system that mediates feelings of pleasure from sex, recreational drugs, and food is also critical to experiencing musical pleasure, according to a study by McGill University researchers published today in the Nature journal Scientific Reports.

“This is the first demonstration that the brain’s own opioids are directly involved in musical pleasure,” says cognitive psychologist Daniel Levitin, senior author of the paper. While previous studies by Levitin’s lab and others had used neuroimaging to map areas of the brain that are active during moments of musical pleasure, scientists were able only to infer the involvement of the opioid system.

In the new study, Levitin’s team at McGill selectively and temporarily blocked opioids in the brain using naltrexone, a widely prescribed drug for treating addiction disorders. The researchers then measured participants’ responses to music, and found that even the participants’ favorite songs no longer elicited feelings of pleasure.

“The findings, themselves, were what we hypothesized,” Levitin says. “But the anecdotes—the impressions our participants shared with us after the experiment—were fascinating. One said: ‘I know this is my favorite song but it doesn’t feel like it usually does.’ Another: ‘It sounds pretty, but it’s not doing anything for me.'”

Things that people enjoy – alcohol, sex, a friendly game of poker, to name a few – can also lead to addictive behaviors that can harm lives and relationships. So understanding the neurochemical roots of pleasure has been an important part of neuroscience research for decades. But scientists only recently developed the tools and methods to do such research in humans.

Still, this study proved to be “the most involved, difficult and Sisyphean task our lab has undertaken in 20 years of research,” Levitin says. “Anytime you give prescription drugs to college students who don’t need them for health reasons, you have to be very careful to ensure against any possible ill effects.” For example, all 17 participants were required to have had a blood test within the year preceding the experiment, to ensure they didn’t have any conditions that would be made worse by the drug.

Music’s universality and its ability to deeply affect emotions suggest an evolutionary origin, and the new findings “add to the growing body of evidence for the evolutionary biological substrates of music,” the researchers write.

This article was posted on medicalxpress.com

 

 

 

Swearing is actually a sign of more intelligence – not less – say scientists

The use of obscene or taboo language – or swearing, as it’s more commonly known – is often seen as a sign that the speaker lacks vocabulary, cannot express themselves in a less offensive way, or even lacks intelligence.

Studies have shown, however, that swearing may in fact display a more, rather than less, intelligent use of language.

While swearing can become a habit, we choose to swear in different contexts and for different purposes: for linguistic effect, to convey emotion, for laughs, or perhaps even to be deliberately nasty.

Psychologists interested in when and why people swear try to look past the stereotype that swearing is the language of the unintelligent and illiterate.

In fact, a study by psychologists from Marist College found links between how fluent a person is in the English language and how fluent they are in swearing.

The former – verbal fluency – can be measured by asking volunteers to think of as many words beginning with a certain letter of the alphabet as they can in 1 minute.

People with greater language skills can generally think of more examples in the allotted time. Based on this approach, the researchers created the swearing fluency task. This task requires volunteers to list as many different swear words as they can think of in 1 minute.

By comparing scores from both the verbal and swearing fluency tasks, it was found that the people who scored highest on the verbal fluency test also tended to do best on the swearing fluency task. The weakest in the verbal fluency test also did poorly on the swearing fluency task.
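In statistical terms, that pattern is a positive correlation between the two fluency scores. A minimal sketch of the calculation, using invented scores rather than the Marist College data:

```python
# Illustrative Pearson correlation between verbal fluency and swearing
# fluency. The scores below are invented, not the study's data.
from scipy import stats

verbal_fluency = [22, 31, 18, 27, 35, 14, 29, 24]   # words listed in 1 minute
swearing_fluency = [9, 14, 7, 11, 16, 5, 12, 10]    # swear words listed in 1 minute

r, p = stats.pearsonr(verbal_fluency, swearing_fluency)
print(f"r = {r:.2f}, p = {p:.3g}")  # a positive r is the pattern the study reported
```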

What this correlation suggests is that swearing isn’t simply a sign of language poverty, lack of general vocabulary, or low intelligence.

Instead, swearing appears to be a feature of language that an articulate speaker can use in order to communicate with maximum effectiveness. And actually, some uses of swearing go beyond just communication.

Video via Keele University

Natural pain relief

Research we conducted involved asking volunteers to hold their hand in iced water for as long as they could tolerate, while repeating a swear word.

The same set of participants underwent the iced water test on a separate occasion, but this time they repeated a neutral, non-swear word. The heart rate of both sets of participants was monitored.

What we found was that those who swore withstood the pain of the ice-cold water for longer, rated it as less painful, and showed a greater increase in heart rate when compared to those who repeated a neutral word.

This suggests they had an emotional response to swearing and an activation of the fight-or-flight response: a natural defence mechanism that not only releases adrenalin and quickens the pulse, but also includes a natural form of pain relief known as stress-induced analgesia.

This research was inspired by the birth of my daughter when my wife swore profusely during agonising contractions. The midwives were surprisingly unfazed, and told us that swearing is a normal and common occurrence during childbirth – perhaps for reasons similar to our iced water study.

Two-way emotional relationship

We wanted to further investigate how swearing and emotion are linked. Our most recent study aimed to assess the opposite of the original research, so instead of looking at whether swearing induced emotion in the speaker we examined whether emotion could cause an increase in swearing fluency.

Participants were asked to play a first person shooter video game in order to generate emotional arousal in the laboratory. They played for ten minutes, during which they explored a virtual environment and fought and shot at a variety of enemies.

We found that this was a successful way to arouse emotions, since the participants reported feeling more aggressive afterwards when compared with those who played a golf video game.

Next, the participants undertook the swearing fluency task. As predicted, the participants who played the shooting game were able to list a greater number of swear words than those who played the golf game.

This confirms a two-way relationship between swearing and emotion. Not only can swearing provoke an emotional response, as shown with the iced water study, but emotional arousal can also facilitate greater swearing fluency.

What this collection of studies shows is that there is more to swearing than simply causing offence, or a lack of verbal hygiene. Language is a sophisticated toolkit, and swearing is a part of it.

Unsurprisingly, many of the final words of pilots killed in air-crashes captured on the ‘black box’ flight recorder feature swearing. And this emphasises a crucial point, that swearing must be important given its prominence in matters of life and death.

The fact is that the size of your vocabulary of swear words is linked with your overall vocabulary, and swearing is inextricably linked to the experience and expression of feelings and emotions.

The Conversation

Richard Stephens, Senior Lecturer in Psychology, Keele University

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

This article was also posted on ScienceAlert

 

 

 

Tiny, 540-Million-Year-Old Human Ancestor Didn’t Have an Anus

A scanning electron microscope (SEM) took this detailed image of the deuterostome with the extra-large mouth.

Credit: Jian Han


A speck-size creature without an anus is the oldest known prehistoric ancestor of humans, a new study finds.

Researchers found the remains of the 540-million-year-old critter — a bag-like sea organism — in central China. The creature is so novel, it has its own family (Saccorhytidae), as well as its own genus and species (Saccorhytus coronarius), named for its wrinkled, sac-like body. (“Saccus” means “sac” in Latin, and “rhytis” means “wrinkle” in Greek.)

S. coronarius, with its oval body and large mouth, is likely a deuterostome, a group that includes all vertebrates, including humans, and some invertebrates, such as starfish. [7 Theories on the Origin of Life]

“We think that as an early deuterostome, this may represent the primitive beginnings of a very diverse range of species, including ourselves,” Simon Conway Morris, a professor of evolutionary palaeobiology at the University of Cambridge, said in a statement. “To the naked eye, the fossils we studied look like tiny black grains, but under the microscope the level of detail is jaw-dropping.”

At first glance, however, S. coronarius does not appear to have much in common with modern humans. It was about a millimeter (0.04 inches) long, and likely lived between grains of sand on the seafloor during the early Cambrian period.

While the mouth on S. coronarius was large for its teensy body, the creature doesn’t appear to have an anus. [See Images of the Bag-Like Animal & Other Cambrian Creatures]

“If that was the case, then any waste material would simply have been taken out back through the mouth, which from our perspective sounds rather unappealing,” Conway Morris said.

Other deuterostome groups are known from about 510 million to 520 million years ago, a time when they had already started to evolve into vertebrates, as well as sea squirts, echinoderms (starfish and sea urchins) and hemichordates (a group that includes acorn worms).

However, these incredibly diverse animals made it hard for scientists to figure out what the common deuterostome ancestor would have looked like, the researchers said.

The newfound microfossils answered that question, they said. The researchers used an electron microscope and a computed tomography (CT) scan to construct an image of S. coronarius.

“We had to process enormous volumes of limestone — about 3 tonnes [3 tons] — to get to the fossils, but a steady stream of new finds allowed us to tackle some key questions: Was this a very early echinoderm or something even more primitive?” study co-researcher Jian Han, a paleontologist at Northwest University in China, said in the statement. “The latter now seems to be the correct answer.”

The analysis indicated that S. coronarius had a bilaterally symmetrical body, a characteristic it passed down to its descendants, including humans. It was also covered with a thin, flexible skin, suggesting it had muscles of some kind that could perhaps help it wriggle around in the water and engulf food with its large mouth, the researchers said.

Small, conical structures encircling its mouth may have allowed water it swallowed to escape from its body. Perhaps these structures were the precursor of gill slits, the researchers said.

Now that researchers know that deuterostomes existed 540 million years ago, they can try to match the timing to estimates from biomolecular data, known as the “molecular clock.”

Theoretically, researchers can determine when two species diverged by quantifying the genetic differences between them. If two groups are distantly related, for instance, they should have extremely different genomes, the researchers said.
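The arithmetic behind a molecular clock is straightforward: divergence time is roughly the genetic distance between two lineages divided by twice the substitution rate (twice, because both lineages accumulate changes independently). A toy example with made-up round numbers:

```python
# Toy molecular-clock estimate. Both numbers are assumptions chosen for
# illustration, not measurements relating to S. coronarius.

substitution_rate = 1e-9   # substitutions per site per year, assumed constant
genetic_distance = 1.08    # substitutions per site separating two lineages

# Both lineages accumulate changes, hence the factor of 2.
divergence_time = genetic_distance / (2 * substitution_rate)
print(f"Estimated divergence: {divergence_time:.2e} years ago")  # 5.40e+08
```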

However, there are few fossils from S. coronarius’ time, making it difficult to match the molecular clocks of other animals to this one, the researchers said. This may be because animals before deuterostomes were simply too minuscule to leave fossils behind, they said.

The findings were published online today (Jan. 30) in the journal Nature.

In another paper, researchers reported on the discovery of another type of tiny animal fossil from the late Cambrian. These creatures, called loriciferans, measured about 0.01 inches (0.3 mm) and, like S. coronarius, lived between grains of sand, the researchers said in a study published online today in the journal Nature Ecology and Evolution.

The newly identified species, Eolorica deadwoodensis, discovered in western Canada, shows when multicellular animals began living in areas once inhabited by single-celled organisms, the researchers said.

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

This article was originally posted on livescience.com

By Laura Geggel

 

 

 

Your appendix might serve an important biological function after all


One of the first things you learn about evolution in school is that the human body has a number of ‘vestigial’ parts – appendix, wisdom teeth, tailbone – that gradually fell out of use as we adapted to more advanced lifestyles than our primitive ancestors.

But while our wisdom teeth are definitely causing us more pain than good right now, the human appendix could be more than just a ticking time bomb sitting in your abdomen. A new study says it could actually serve an important biological function – and one that humans aren’t ready to give up.

Researchers from Midwestern University traced the appearance, disappearance, and reemergence of the appendix in several mammal lineages over the past 11 million years, to figure out how many times it was cut and brought back due to evolutionary pressures.

They found that the organ has evolved at least 29 times – possibly as many as 41 times – throughout mammalian evolution, and has only been lost a maximum of 12 times.

“This statistically strong evidence that the appearance of the appendix is significantly more probable than its loss suggests a selective value for this structure,” the team reports.

“Thus, we can confidently reject the hypothesis that the appendix is a vestigial structure with little adaptive value or function among mammals.”

If the appendix has been making multiple comebacks in humans and other mammals across millions of years, what exactly is it good for?

Conventional wisdom states that the human appendix is the shrunken remnant of an organ that once played an important role in a remote ancestor of humans millions of years ago.

The reason it still exists – and occasionally has to be removed due to potentially fatal inflammation and rupturing – is that it’s too ‘evolutionarily expensive’ to get rid of altogether. There’s little evolutionary pressure to lose such a significant part of the body.

In other words, the amount of effort it would take for the human species to gradually lose the appendix through thousands of years of evolution is just not worth it, because in the majority of people, it just sits there not hurting anyone.

But what if it’s doing more than just sitting there?

For years now, researchers have been searching for a possible function of the human appendix, and the leading hypothesis is that it’s a haven for ‘good’ intestinal bacteria that help us keep certain infections at bay.

One of the best pieces of evidence we’ve had for this suggestion is a 2012 study, which found that individuals without an appendix were four times more likely to have a recurrence of Clostridium difficile colitis – a bacterial infection that causes diarrhoea, fever, nausea, and abdominal pain.

As Scientific American explains, recurrence in individuals with their appendix intact occurred in 11 percent of cases reported at the Winthrop-University Hospital in New York, while recurrence in individuals without their appendix occurred in 48 percent of cases.

Now the Midwestern University team has taken a different approach to arrive at the same conclusion.

First they gathered data on the presence or absence of the appendix and other gastrointestinal and environmental traits across 533 mammal species over the past 11 million years.

Onto the genetic tree for each of these lineages, they traced how the appendix evolved, and found that once the organ appeared, it was almost never lost.

“[T]he appendix has evolved independently in several mammal lineages, over 30 separate times, and almost never disappears from a lineage once it has appeared,” the team explains in a press statement.

“This suggests that the appendix likely serves an adaptive purpose.”

Next, the researchers considered various ecological factors – the species’ social behaviours, diet, habitat, and local climate – to figure out what that “adaptive purpose” could be.

They found that species that had retained or regained an appendix had higher average concentrations of lymphoid (immune) tissue in the cecum – a small pouch connected to the junction of the small and large intestines.

This suggests that the appendix could play an important role in a species’ immune system, particularly as lymphatic tissue is known to stimulate the growth of certain types of beneficial gut bacteria.

“While these links between the appendix and cecal factors have been suggested before, this is the first time they have been statistically validated,” the team concludes in their paper.

“The association between appendix presence and lymphoid tissue provides support for the immune hypothesis of appendix evolution.”

The study is far from conclusive, but offers a different perspective on the hypothesis that humans have been keeping the appendix around for its immune support this whole time.

The challenge now is to prove it, which is easier said than done, seeing as most people who have had their appendix removed don’t suffer from any adverse long-term effects.

But it could be that when people get their appendix removed, immune cell-producing tissues in the cecum and elsewhere in the body step up to compensate for the loss.

One thing’s for sure in all of this – while we’re probably not going to regain our tails, it’s too soon to write off the appendix just yet.

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

Check out our Flipboard magazine ESIST if you enjoy reading our posts. All of our handpicked articles will be delivered to the Flipboard magazine every day.

This article was originally posted on ScienceAlert

by BEC CREW

 

 

Scientists have mapped an underwater Stone Age settlement

If you want to study ancient human civilisations, you might want to start by getting a diving certification, because many now lie underwater, thanks to our warmer climate.

One of the most intriguing of these communities dates back to the Neolithic, or New Stone Age, period, and lies totally submerged, around 20 metres (65 feet) below sea level in Hanö Bay, off the southern coast of Sweden.

The 9,000-year-old settlement was first discovered seven years ago, but has just been mapped by scientists from Lund University in Sweden.

The settlement existed at a crucial time in human history, and the researchers hope that by understanding who lived there, and how they lived, they’ll get some insight into how humans conquered the globe.

“If you want to fully understand how humans dispersed from Africa, and their way of life, we also have to find all their settlements,” said team member Anton Hansson.

“As geologists, we want to recreate this area and understand how it looked. Was it warm or cold? How did the environment change over time?”

In the video below, you can watch as researchers explore the incredible settlement, which remains unnamed:

Although it seems weird for an ancient settlement to end up underwater, it’s actually not that uncommon, seeing as humans favour living near the coast – leaving their homes vulnerable to being swallowed up as sea levels rise.

“Quite a few of these [settlements] are currently underwater, since the sea level is higher today than during the last glaciation. Humans have always preferred coastal sites,” explains Hansson.

Other than natural sea level rise, quite a few other settlements have sunk into the ocean thanks to other powerful geological events.

For example, the Greek island of Santorini was once a larger island in the Aegean Sea, and home to Akrotiri, a Late Bronze Age outpost of Minoan civilisation, which preceded ancient Greece.

But in the 17th century BCE, a nearby volcanic explosion triggered a tsunami that wiped the Akrotiri civilisation off the map, and broke Santorini into a few smaller islands – an event that researchers are still studying to this day.

When Stone Age humans lived at the site the Lund University team is currently studying, some 9,000 years ago, geologists think it was home to a lagoon, making it popular with fishermen. That hypothesis is backed up by a series of extremely well-preserved fishing traps found at the site.

Besides the fishing traps, the team has also uncovered a 9,000-year-old pickaxe made from elk antlers, and has successfully taken samples of sediment for radiocarbon dating.

The team has just completed a bathymetrical (or depth) map of the region, allowing them to better understand how the area might have looked when humans lived there.

Even with so much data already recorded, there are still a lot of unanswered questions about the settlement’s inhabitants. Did they live there year-round? Was fishing their primary way of life?

As the careful excavation continues, more details will be exposed, but until then, at least we have some awesome footage to hold us over.

Join our fans by liking us on Facebook, or follow us on Twitter, Google+, feedly, flipboard and Instagram.

 

 

 

 

The macabre fate of ‘beating heart corpses’

Their hearts are still beating. They urinate. Their bodies don’t decompose and they are warm to the touch; their stomachs rumble, their wounds heal and their guts can digest food. They can have heart attacks, catch a fever and suffer from bedsores. They can blush and sweat – they can even have babies.

And yet, according to most legal definitions and the vast majority of doctors these patients are thoroughly, indisputably deceased.

These are the beating heart cadavers; brain-dead corpses with functioning organs and a pulse. Their medical costs are astronomical (up to $217,784 for just a few weeks), but with a bit of luck and a lot of help, today it’s possible for the body to survive for months – or in rare cases, decades – even though it’s technically dead. How is this possible? Why does this happen? And how do doctors know they’re really dead?

Premature burials

Identifying the dead has never been easy. In 19th Century France there were 30 theories about how to tell if someone had passed away – including attaching pincers to their nipples and putting leeches in their bottom. Elsewhere, the most reliable methods included yelling a patient’s name (if the patient ignored them three times, they were dead) or thrusting mirrors under their noses to see if they fogged up.


Early attempts to test for signs of life included attaching pincers to nipples (Credit: Getty Images)

Suffice to say, the medical establishment wasn’t convinced about any of them. Then in 1846, the Academy of Sciences in Paris launched a competition for “the best work on the signs of death and the means of preventing premature burials” and a young doctor tried his luck. Eugène Bouchut figured that if a person’s heart had stopped beating, they were surely dead. He suggested using the newly invented stethoscope to listen for a heartbeat – if the doctor didn’t hear anything for two minutes, they could be safely buried.

He won the competition and his definition of “clinical death” stuck, eventually to be immortalised in films, books and popular wisdom. “There wasn’t much that could be done, so basically anyone could look at a person, check for a pulse and decide whether they were dead or alive,” says Robert Veatch from the Kennedy Institute of Ethics.

But a chance discovery in the 1920s made things decidedly messier. An electrical engineer from Brooklyn, New York, had been investigating why people die after they’ve been electrocuted – and wondered if the right voltage might also jolt them back to life. William Kouwenhoven devoted the next 50 years to finding a way to make it happen, work which eventually led to the invention of the defibrillator.


The loss of heart beat was once considered a sign of death, but we now know this need not be the end (Credit: Getty Images)

It was the first of a deluge of revolutionary new techniques, including mechanical ventilators and feeding tubes, catheters and dialysis machines. For the first time, you could lack certain bodily functions and still be alive. Our understanding of death was becoming unstuck.

The invention of the EEG – which can be used to identify brain activity – dealt the final blow. Starting in the 1950s, doctors across the globe began discovering that some of their patients, whom they had previously considered only comatose, in fact had no brain activity at all. In France the mysterious phenomenon was termed coma dépassé, meaning literally “a state beyond coma”. They had discovered the ‘beating-heart cadavers’, people whose bodies were alive though their brains were dead.

This was an entirely new category of patient, one which overturned 5,000 years of medical understanding in a single sweep, raising new questions about how death is identified and dredging up some thorny philosophical, ethical and legal issues to boot.

“It goes back and forth as to what people call them but I think patient is the correct term,” says Eelco Wijdicks, a neurologist from Rochester, Minnesota.


Beating heart cadavers should not be confused with coma patients or those in a vegetative state (Credit: Science Photo Library)

These beating heart cadavers should not be confused with other kinds of unconscious patients, such as those in a coma. Though they aren’t able to sit up and respond to the sound of their name, they still show brain activity, undergoing cycles of sleep and (unresponsive) wakefulness. A patient in a coma has the potential to make a full recovery.

A persistent vegetative state is decidedly more serious – in these patients the higher brain is permanently, irretrievably damaged – but though they will never have another conscious thought, again, they are not dead.

To qualify as a beating heart cadaver, the entire brain must be dead. This includes the “brain stem”, the primitive, tube-shaped mass at the bottom of the brain which controls critical bodily functions, such as breathing. But, somewhat disconcertingly, our other organs aren’t as troubled by the death of their HQ as you’d think.

Alan Shewmon, a neurologist from UCLA and outspoken critic of the brain death definition, identified 175 cases where people’s bodies survived for more than a week after the person had died. In some cases, their hearts kept beating and their organs kept functioning for a further 14 years – for one cadaver, this strange afterlife lasted two decades.

How is this possible?

In fact, biologically speaking, there has never been a single moment of death; each passing is really a series of mini-deaths, with different tissues dropping off at different rates. “Choosing a definition of death is essentially a religious or philosophical question,” says Veatch.


The brain uses up 25% of our body’s oxygen, meaning it is the first organ to die after we stop breathing (Credit: Getty Images)

For centuries, soldiers, butchers and executioners have observed how certain body parts may continue twitching after decapitation or dismemberment. Even long before life support, 19th Century physicians related accounts of patients whose hearts had continued to beat for several hours after they stopped breathing.

At times, this slow decline can have alarming consequences. One example is the Lazarus sign, an automatic reflex first reported in 1984. The reflex causes the dead to sit up, briefly raise their arms and drop them, crossed, onto their chests. It happens because while most reflexes are mediated by the brain, some are overseen by “reflex arcs”, which travel through the spine instead. In addition to the Lazarus reflex, corpses also have the knee-jerk reflex intact.

Further along the life-death continuum, skin and brain stem cells are known to remain alive for several days after a person has died. Living muscle stem cells have been found in corpses that are two and a half weeks old.

Even our genes keep going long after we’ve taken our last breath. Earlier this year, scientists discovered thousands of genes which spring to life days after death, including those involved in inflammation, counteracting stress and – mysteriously – embryonic development.

Beating heart cadavers can only exist because of this lopsided decline – it’s all dependent on the brain dying first. To get to grips with why this happens, consider this. Though the brain makes up just 2% of a person’s body weight, it sucks up a staggering 25% of all its oxygen.

Neurons are so high-maintenance in part because they are active all the time. They are constantly pumping out ions to create miniature electrical gradients between their insides and the surrounding environment; to fire, they simply open up the floodgates and let the ions flow back in.

The trouble is, they can’t stop pumping. If their efforts are stalled by a lack of oxygen, neurons are rapidly inundated with ions which build to toxic levels, causing irreversible damage. This “ischaemic cascade” explains why if you accidentally lop off a finger, it can usually be sewn back on, but most people can’t hold their breath for more than a few minutes without fainting.


Doctors now follow standard procedures to test for lingering signs of life (Credit: Science Photo Library)

Which brings us back to that perennial medical problem: if your heart’s still beating, how can doctors tell you’re dead? To begin with, doctors identified victims of coma dépassé by checking for the absence of brain activity on an EEG. But there was a problem.


Alarmingly, alcohol, anaesthesia, some illnesses (such as hypothermia) and many drugs (including Valium) can shut down brain activity, conning doctors into thinking their patient is dead. In 2009, Colleen Burns was found in a drug-induced coma and doctors at a hospital in New York thought she was dead. She woke up in the operating room the day before doctors were due to remove her organs (NB: it’s unlikely this would have gone ahead, because her doctors had planned additional tests before the surgery).

Several decades earlier in 1968, a group of esteemed Harvard doctors called an emergency meeting to discuss exactly this. Over the course of several months, they devised a set of foolproof criteria which would allow doctors to avoid such blunders and establish that beating heart cadavers were definitely dead.

The tests remain the global standard today, though some of them look uncannily like those from the 19th Century. For a start, a patient should be “unresponsive to verbal stimuli”, such as yelling their name. And though leeches and nipple pincers are out, they should remain unresponsive despite numerous uncomfortable procedures, including injecting ice-cold water into one of their ears – a technique which aims to trigger an automatic reflex and make the eyes move. This particular test is so valuable it won its discoverer a Nobel Prize.

Finally, the patient shouldn’t be able to breathe on their own, since this is a sure sign that their primitive brain is still going. In the case of Burns, the horrifying incident was only possible because her doctors ignored tell-tale signs that she was alive; she curled her toes when they were touched, moved her mouth and tongue and was breathing independently, though she was hooked up to a respirator. Had they followed the Harvard criteria correctly, they would never have declared her dead.

Cadaver donor management

You might expect all medical treatment to stop after someone is considered dead – even if they are a beating heart cadaver – but that’s not quite true. Today beating heart cadavers have spawned a strange new medical specialty, “cadaver donor management”, which aims to improve the success of transplants by tending to the health of the dead. The aim of the game is to fool the body into thinking everything is fine until recipients are lined up and their surgeons are ready.

In all, nearly twice as many viable organs – around 3.9 per cadaver – are retrieved from these donors compared to those without a pulse, and they’re currently the only reliable source of hearts for transplant.


Beating heart cadavers may be given sophisticated treatments to preserve the organs for donation (Credit: Science Photo Library)

Intriguingly, the part of the brain that the body misses most is not its primitive stem or, as we’d like to think, the wrinkled seat of human consciousness (the cortex), but the hypothalamus. The almond-shaped structure monitors levels of important hormones, including those which regulate a person’s blood pressure, appetite, circadian rhythms, sugar levels, fluid balance and energy expenditure – then makes them, or instructs the pituitary gland to do so.

Instead the hormones must be provided by intensive care teams, who add just enough to an intravenous drip as and when they are needed. “It’s not just a case of putting them on a ventilator and giving them some food – it’s far more than that,” says Wijdicks.


Of course, not everyone is comfortable with the idea. To some, organ donor management reduces human beings to mere collections of organs to be stripped for parts. As journalist Dick Teresi cynically put it, once the consent forms have been signed, dead patients receive the best medical care of their lives.

These interventions are only possible because the Harvard tests promise to sort the dead and the living into neat boxes – but alas, yet again death is messier than we’d like to think. In a review of 611 patients diagnosed as brain dead using their criteria, scientists found brain activity in 23%. In another study, 4% had sleep-like patterns of activity for up to a week after they had died. Others have reported beating heart cadavers flinching under the surgeon’s knife and there have even been suggestions that they should be given an anaesthetic – though this is controversial.


The exact definition of death depends on our culture and religion (Credit: Getty Images)

To inject further controversy into the mix, some people don’t even agree with the definition in principle, let alone in practice. In the United States, many Orthodox Jews, some Roman Catholics and certain ethnic minorities – in total, around 20% of the population – like their dead with a flat-lining heart rate and cold to the touch. “There’s this group of people who quite militantly are offended when a doctor tries to pronounce death on someone that the family thinks are still alive,” says Veatch.

“Even with clinical death, there are disputes – for instance about how long it’s necessary for circulation to be lost before it’s impossible for it to be restored. We use five minutes in the US but there isn’t really good evidence that that’s the right number,” says Veatch.

At the heart of many legal struggles is the right to choose your own definition of death and when life support should be removed, issues Veatch is particularly passionate about. “I have consistently supported individuals who would insist on a circulatory definition, though that’s not the one I would use,” he says.

Where it gets particularly sticky is if the victim is pregnant. In these cases, the patient’s family have a heart-breaking choice to make. They can either accept that they’ve lost her unborn baby, or begin the intensive and often gruesome battle to keep her going long enough to deliver, which is usually when the foetus is about 24 weeks old.

Back in 2013, Marlise Munoz was found unconscious at her home in Texas. Her doctors suspected that she had suffered a pulmonary embolism and discovered that she was 14 weeks pregnant. Two days later she was declared dead. Munoz was a paramedic and had previously told her husband that in case of brain death, she would not want to be kept alive artificially. He petitioned to have her life support removed – but the hospital refused.


A beating heart cadaver may still be able to sustain a growing foetus (Credit: Science Photo Library)

“In Texas there’s an automatic invalidation of a pregnant woman’s advanced directive. If she wanted them to withdraw life-sustaining treatment, then when she died that would not be allowed – that would be ripped up. She would be provided life-sustaining treatment,” says Christopher Burkle, an anaesthetist from Rochester, Minnesota who co-authored a paper on the subject with Wijdicks.

The circumstances are extremely rare, with only about 30 reported cases between 1982 and 2010, but the tug-of-war between the interests of the mother and those of her unborn baby raises the question: which human rights should we retain when we’re dead?

“In the US a dead patient still has rights to the protection of their medical information, for example. You can’t publish their medical record on the 6 o’clock news – a person who is dead has privacy rights in that respect. It’s not a huge jump to suggest that rights be maintained in other avenues for a dead person,” says Burkle.

And things may be about to get a lot more complicated. At the moment, doctors are bound by the “dead donor rule”, which asserts that no organs can be removed until a person is dead – that is, totally brain-dead or with a heart which has already stopped beating. But some people, including Veatch, think this should change.

They have proposed the “higher brain” definition, under which a person isn’t dead when their heart stops beating, or even when they stop breathing – a person is dead when they lose their “personhood”. Those with crucial parts of their brains intact and the ability to breathe independently would be considered dead so long as they could no longer have conscious thoughts.

By loosening up the definition a little further, transplant doctors would have access to a much larger pool of potential donors than they do at the moment and save countless lives.

Death isn’t an event, it’s a process – but after thousands of years of trying, we’re still searching for something more definitive. It doesn’t look like this is about to end any time soon.

Check out our Flipboard magazine ESIST if you enjoy reading our posts. All of our handpicked articles will be delivered to the Flipboard magazine every day.

Original article on BBC

By Zaria Gorvett

 

 

 

3 Human Chimeras That Already Exist – ESIST


The news that researchers want to create human-animal chimeras has generated controversy recently, and may conjure up ideas about Frankenstein-ish experiments. But chimeras aren’t always man-made—and there are a number of examples of human chimeras that already exist.

A chimera is essentially a single organism that’s made up of cells from two or more “individuals”—that is, it contains two sets of DNA, with the code to make two separate organisms.

One way that chimeras can happen naturally in humans is that a fetus can absorb its twin. This can occur with fraternal twins, if one embryo dies very early in pregnancy, and some of its cells are “absorbed” by the other twin. The remaining fetus will have two sets of cells, its own original set, plus the one from its twin. [Seeing Double: 8 Fascinating Facts About Twins]

These individuals often don’t know they are a chimera. For example, in 2002, news outlets reported the story of a woman named Karen Keegan, who needed a kidney transplant and underwent genetic testing along with her family, to see if a family member could donate one to her. But the tests suggested that genetically, Keegan could not be the mother of her sons. The mystery was solved when doctors discovered that Keegan was a chimera—she had a different set of DNA in her blood cells compared to the other tissues in her body.

A person can also be a chimera if they undergo a bone marrow transplant. During such transplants, which can be used for example to treat leukemia, a person will have their own bone marrow destroyed and replaced with bone marrow from another person. Bone marrow contains stem cells that develop into red blood cells. This means that a person with a bone marrow transplant will have blood cells, for the rest of their life, that are genetically identical to those of the donor, and are not genetically the same as the other cells in their own body.

Continue Reading

By Rachael Rettner

 

Source: 3 Human Chimeras That Already Exist – Scientific American

Researchers shed light on evolutionary mystery: Origins of the female orgasm

Female orgasm seems to be a happy afterthought of our evolutionary past when it helped stimulate ovulation, a new study of mammals shows.

The role of female orgasm, which plays no obvious role in human reproduction, has intrigued scholars as far back as Aristotle. Numerous theories have tried to explain the origins of the trait, but most have concentrated on its role in human and primate biology.

Now scientists at Yale and the Cincinnati Children’s Hospital have provided fresh insights on the subject by examining the evolving trait across different species. Their study appears Aug. 1 in the journal JEZ-Molecular and Developmental Evolution.

“Prior studies have tended to focus on evidence from human biology and the modification of a trait rather than its evolutionary origin,” said Gunter Wagner, the Alison Richard Professor of Ecology and Evolutionary Biology, and a member of Yale’s Systems Biology Institute.

Instead, Wagner and Mihaela Pavličev of the Center for Prevention of Preterm Birth at Cincinnati Children’s Hospital propose that the trait that evolved into human female orgasm had an ancestral function in inducing ovulation.

Since there is no apparent association between orgasm and number of offspring or successful reproduction in humans, the scientists focused on a specific physiological trait that accompanies human female orgasm—the neuro-endocrine discharge of prolactin and oxytocin—and looked for this activity in other placental mammals. They found that in many mammals this reflex plays a role in ovulation.

In spite of the enormous diversity of mammalian reproductive biology, some core characteristics can be traced throughout mammalian evolution, note the researchers. The female ovarian cycle in humans, for instance, is not dependent upon sexual activity. However, in other mammalian species ovulation is induced by males. The scientists’ analysis shows male-induced ovulation evolved first and that cyclical or spontaneous ovulation is a derived trait that evolved later.

Continue Reading 

Article photo via: clarebainstudio

Source: Researchers shed light on evolutionary mystery: Origins of the female orgasm