About 400 million years ago, before trees were common, the Earth was covered with giant mushrooms.
When you walk into Starbucks and you…
I. Smell the coffee aroma (olfactory)…
II. Read the order menu from about 20 feet away (optic), then you…
III. Your pupils constrict as you look at items, such as muffins, up close (oculomotor)…
IV. You look up at the salesperson, then down at your money as you pay (trochlear)…
V. You clench your teeth and touch your face when they call your drink (trigeminal)…
VI. You look side to side to see if anyone else has ordered the same drink (abducens)…
VII. You smile because you realize this IS your drink (facial), then…
VIII. You hear someone say “You can sit here, we are leaving” (auditory). As you sit…
IX. You taste the sweet whipped cream on top of your drink (glossopharyngeal)…
X. You say “Ahhhh, this is good!” (vagus)…
XI. You look at the person next to you because they heard you, and then you shrug your shoulders (spinal accessory)…
XII. When they look away, you stick your tongue out at them (hypoglossal)!
A new fossil discovery has shown that birds developed the unique vocal organ that enables them to sing more than 66 million years ago, when dinosaurs still walked the Earth.
But the earliest syrinx, an arrangement of vibrating cartilage rings at the base of the windpipe, was still a long way from producing the lilting notes of a song thrush or blackbird.
Scientists believe the extinct duck and goose relative that possessed the organ was only capable of making honking noises.
The bird, Vegavis iaai, lived during the Cretaceous period. Although its fossil bones were unearthed from Vega Island in Antarctica in 1992, it was not until three years ago that experts spotted the syrinx.
All birds living today are descended from a particular family of dinosaurs that developed feathers and the ability to fly.
The new discovery suggests the syrinx is another hallmark of birds that was absent from non-avian dinosaurs…
Researchers at King’s College London found that the drug Tideglusib stimulates the stem cells contained in the pulp of teeth so that they generate new dentine – the mineralised material under the enamel.
Teeth already have the capability of regenerating dentine if the pulp inside the tooth becomes exposed through a trauma or infection, but can only naturally make a very thin layer, and not enough to fill the deep cavities caused by tooth decay.
But Tideglusib switches off an enzyme called GSK-3, which normally stops dentine from continuing to form.
Scientists showed it is possible to soak a small biodegradable sponge with the drug and insert it into a cavity, where it triggers the growth of dentine and repairs the damage within six weeks.
The tiny sponges are made out of collagen so they melt away over time, leaving only the repaired tooth.
Herd immunity is the idea that if enough people get immunized against a disease, they’ll create protection for even those who aren’t vaccinated. This is important to protect those who can’t get vaccinated, like immunocompromised children.
You can see in the image how low levels of vaccination lead to everyone getting infected. Medium levels slow the progression of the illness, but they don’t offer robust protection to the unvaccinated. But once you reach a high enough level of vaccination, the disease gets effectively road-blocked: it can’t spread fast enough because it encounters too many vaccinated individuals, and so the majority of the population (even the unvaccinated people) are protected.
Find out more here.
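The threshold effect described above can be reproduced in a toy outbreak simulation. This is a minimal sketch, not an epidemiological model: the population size, contact rate, transmission probability, and one-step infectious period are all illustrative assumptions.

```python
import random

def simulate(pop_size, vax_fraction, r_contacts=10, p_transmit=0.3,
             n_seed=5, seed=42):
    """Toy outbreak: each infected person contacts r_contacts random people
    per step; unvaccinated susceptible contacts are infected with
    probability p_transmit. Returns the total number ever infected."""
    rng = random.Random(seed)
    # States: 'S' susceptible, 'V' vaccinated, 'I' infected, 'R' recovered.
    state = ['V' if rng.random() < vax_fraction else 'S'
             for _ in range(pop_size)]
    for i in rng.sample(range(pop_size), n_seed):
        state[i] = 'I'                      # seed a few initial cases
    total_infected = n_seed
    while 'I' in state:
        infected = [i for i, s in enumerate(state) if s == 'I']
        newly = set()
        for i in infected:
            for j in rng.choices(range(pop_size), k=r_contacts):
                if state[j] == 'S' and rng.random() < p_transmit:
                    newly.add(j)            # only the unvaccinated can catch it
        for i in infected:
            state[i] = 'R'                  # infectious for one step, then recovered
        for j in newly:
            state[j] = 'I'
        total_infected += len(newly)
    return total_infected

for v in (0.1, 0.5, 0.9):
    print(f"vaccinated {v:.0%}: {simulate(1000, v)} of 1000 ever infected")
```

With these assumed parameters each case seeds roughly three others in a fully susceptible population, so outbreaks fizzle once well over two-thirds of the population is vaccinated (the classic 1 − 1/R₀ herd-immunity threshold).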
(Image caption: The above image compares the neural activation patterns between images from the participants’ brains when reading “O eleitor foi ao protesto” (observed) and the computational model’s prediction for “The voter went to the protest” (predicted))
Brain “Reads” Sentences the Same in English and Portuguese
An international research team led by Carnegie Mellon University has found that when the brain “reads” or decodes a sentence in English or Portuguese, its neural activation patterns are the same.
Published in NeuroImage, the study is the first to show that different languages have similar neural signatures for describing events and scenes. By using a machine-learning algorithm, the research team was able to understand the relationship between sentence meaning and brain activation patterns in English and then recognize sentence meaning based on activation patterns in Portuguese. The findings can be used to improve machine translation, brain decoding across languages and, potentially, second language instruction.
“This tells us that, for the most part, the language we happen to learn to speak does not change the organization of the brain,” said Marcel Just, the D.O. Hebb University Professor of Psychology and pioneer in using brain imaging and machine-learning techniques to identify how the brain deciphers thoughts and concepts.
“Semantic information is represented in the same place in the brain and the same pattern of intensities for everyone. Knowing this means that brain-to-brain or brain-to-computer interfaces can probably be the same for speakers of all languages,” Just said.
For the study, 15 native Portuguese speakers — eight were bilingual in Portuguese and English — read 60 sentences in Portuguese while in a functional magnetic resonance imaging (fMRI) scanner. A CMU-developed computational model was able to predict which sentences the participants were reading in Portuguese, based only on activation patterns.
The computational model uses a set of 42 concept-level semantic features and six markers of the concepts’ roles in the sentence, such as agent or action, to identify brain activation patterns in English.
With 67 percent accuracy, the model predicted which sentences were read in Portuguese. The resulting brain images showed that the activation patterns for the 60 sentences were in the same brain locations and at similar intensity levels for both English and Portuguese sentences.
Additionally, the results revealed that the activation patterns could be grouped into four semantic categories, depending on the sentence’s focus: people, places, actions and feelings. The groupings were very similar across languages, reinforcing the idea that the organization of information in the brain is the same regardless of the language in which it is expressed.
“The cross-language prediction model captured the conceptual gist of the described event or state in the sentences, rather than depending on particular language idiosyncrasies. It demonstrated a meta-language prediction capability from neural signals across people, languages and bilingual status,” said Ying Yang, a postdoctoral associate in psychology at CMU and first author of the study.
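The decoding scheme the article describes — learn a mapping from semantic features to activation patterns in one language, then identify sentences in the other language by matching observed patterns against predictions — can be sketched on synthetic data. This is a minimal illustration, not the CMU model: the data are random numbers, and the voxel count, noise level, and closed-form ridge regression are all assumptions; only the 60-sentence and 42 + 6 feature counts come from the article.

```python
import numpy as np

# Toy illustration only: synthetic "activations", not real fMRI data.
rng = np.random.default_rng(0)
n_sent, n_feat, n_vox = 60, 48, 200  # 60 sentences; 42 semantic + 6 role features; voxel count invented

# Assume a shared semantic code: each sentence has one feature vector, and
# the brain response in either language is a noisy linear image of it.
F = rng.normal(size=(n_sent, n_feat))                  # sentence features
W = rng.normal(size=(n_feat, n_vox))                   # feature -> voxel map
A_en = F @ W + 0.5 * rng.normal(size=(n_sent, n_vox))  # "English" scans
A_pt = F @ W + 0.5 * rng.normal(size=(n_sent, n_vox))  # "Portuguese" scans

# Learn the feature -> activation mapping from the English data
# (closed-form ridge regression).
lam = 1.0
W_hat = np.linalg.solve(F.T @ F + lam * np.eye(n_feat), F.T @ A_en)

# Decode the Portuguese scans: match each observed pattern to the sentence
# whose *predicted* pattern is most similar (cosine similarity).
pred = F @ W_hat
sims = (A_pt @ pred.T) / (
    np.linalg.norm(A_pt, axis=1, keepdims=True) * np.linalg.norm(pred, axis=1)
)
decoded = sims.argmax(axis=1)                  # best-matching sentence per scan
accuracy = (decoded == np.arange(n_sent)).mean()
print(f"rank-1 decoding accuracy on the synthetic data: {accuracy:.0%}")
```

The key design point mirrors the study: the mapping is fit only on one language’s data, and decoding the other language works to the extent that both languages share the same feature-to-activation code.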
It’s #InternationalKissingDay! Here’s some topical lipstick chemistry. More info/high-res image: http://wp.me/s4aPLT-lipstick