A giant eyeball from a mysterious sea creature was found by a man walking the beach in Pompano Beach on Wednesday. Wildlife officials said it likely came from a swordfish, though some experts speculated it could belong to a giant squid, whale, or other large fish. (Source)
About 400 million years ago, before trees were common, the Earth was covered with giant mushrooms. (Source)
Science fiction writers and producers of TV medical dramas: have you ever needed to invent a serious-sounding disease whose symptoms, progression, and cure you can utterly control? Artificial intelligence can help!
Blog reader Kate very kindly compiled a list of 3,765 common names for conditions from this site, and I gave them to an open-source machine learning algorithm called a recurrent neural network, which learns to imitate its training data. Given enough examples of real-world diseases, a neural network should be able to invent enough plausible-sounding syndromes to satisfy any hypochondriac.
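The character-level idea can be sketched without any neural network machinery at all. The toy below is not the recurrent network used here; it is a much simpler character-level Markov model, trained on a tiny hypothetical corpus (the real run used Kate's list of 3,765 condition names). But it shows the same principle: learn which characters tend to follow which, then sample new names one character at a time.

```python
import random
from collections import defaultdict

def train_char_model(names, order=3):
    """Collect character statistics: which character tends to follow
    each `order`-length context in the training names."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name + "$"   # ^ marks start, $ marks end
        for i in range(len(padded) - order):
            context = padded[i:i + order]
            model[context].append(padded[i + order])
    return model

def generate_name(model, order=3, rng=random):
    """Sample one made-up name, character by character."""
    context = "^" * order
    out = []
    while True:
        ch = rng.choice(model[context])
        if ch == "$":           # end-of-name marker
            break
        out.append(ch)
        context = context[1:] + ch
    return "".join(out)

# Tiny stand-in corpus; the real training set had 3,765 condition names.
corpus = ["Anemia", "Angina", "Arthritis", "Asthma",
          "Hepatitis", "Hernia", "Meningitis", "Migraine"]
model = train_char_model(corpus)
print(generate_name(model))
```

With a corpus this small the output mostly recombines fragments of the input names; a real recurrent network trained on thousands of examples generalizes much further, which is where the invented syndromes below come from.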
Early on in the training, the neural network was producing names that were identifiably diseases, but probably wouldn’t fly in a medical drama. “I’m so sorry. You have… poison poison tishues.”
Much Esophageal
Eneetems
Vomania
Poisonicteria Disease
Eleumathromass
Sexurasoma
Ear Allergic Antibody
Insect Sculs
Poison Poison Tishues
Complex Disease
As training progressed, the neural network began to replicate more of the real diseases: lots of ventricular syndromes, for example. But the made-up diseases still weren’t too convincing, and some didn’t sound like diseases at all. (Except for RIP Syndrome. I’d take that one seriously.)
Seal Breath
Tossy Blanter
Cancer of Cancer
Bull Cancer
Spisease
Lentford Foot
Machosaver
RIP Syndrome
The neural network eventually progressed to a stage where it was producing diseases of a few basic varieties:
First kind of disease: This isn’t really a disease. The neural network has just kind of named a body part, or a couple of really generic disease-y words. Pro writer tip: don’t use these in your medical drama.
Fevers
Heading Disorder
Rashimia
Causes Wound
Eye Cysts of the Biles
Swollen Inflammation
Ear Strained Lesions
Sleepys
Lower Right Abdomen Degeneration Disease
Cancer of the Diabetes
Second kind of disease: This disease doesn’t exist, and sounds reasonably convincing to me, though it would probably have a different effect on someone with actual medical training.
Esophagia Pancreation
Vertical Hemoglobin Fever
Facial Agoricosis
Verticular Pasocapheration Syndrome
Agpentive Colon
Strecting Dissection of the Breath
Bacterial Fradular Syndrome
Milk Tomosis
Lemopherapathy
Osteomaroxism
Lower Veminary Hypertension Deficiency
Palencervictivitis
Asthodepic Fever
Hurtical Electrochondropathy
Loss Of Consufficiency
Parpoxitis
Metatoglasty
Fumple Chronosis
Omblex's Hemopheritis
Mardial Denection
Pemphadema
Joint Pseudomalabia
Gumpetic Surpical Escesion
Pholocromagea
Helritis and Flatelet’s Ear
Asteophyterediomentricular Aneurysm
Third kind of disease: Sounds highly implausible but also pretty darn serious. I’d definitely get that looked at.
Ear Poop
Orgly Disease
Cussitis
Occult Finger
Fallblading
Ankle Bladders
Fungle Pain
Cold Gloating
Twengies
Loon Eye
Catdullitis
Black Bote Headache
Excessive Woot Sweating
Teenagerna
Vain Syndrome
Defentious Disorders
Punglnormning Cell Conduction
Hammon Expressive Foot
Liver Bits
Clob
Sweating, Sweating, Excessive
Balloblammus
Metal Ringworm
Eye Stools
Hoot Injury
Hoin and Sponster
Teenager’s Diarey
Eat Cancer
Cancer of the Cancer
Horse Stools
Cold Glock Allergy
Herpangitis
Flautomen
Teenagees Testicle Behavior
Spleen Sink
Eye Stots
Floot Assection
Wamble Submoration
Super Syndrome
Low Life Fish Poisoning
Stumm Complication
Cat Heat
Ovarian Pancreas 8
Poop Cancer Of Hydrogen
Bingplarin Disease
Stress Firgers
Causes of the ladder Exposure
Hop D Treat Decease
Diseases of the fourth kind: These are the, um, reproductive-related diseases, plus those that contain unprintable four-letter words. They manage to sound ludicrous and entirely uncomfortable at the same time, and I really don’t want to print them here. However! If you are in possession of a sense of humor and an email address, you can let me know here and I’ll send them to you.
On Thursday (Feb. 11, 2016) at 10:30 a.m. ET, the National Science Foundation will gather scientists from Caltech, MIT and the LIGO Scientific Collaboration in Washington D.C. to update the scientific community on the efforts being made by the Laser Interferometer Gravitational-wave Observatory (LIGO) to detect gravitational waves.
But why is this exciting? And what the heck are “gravitational waves”?
Genetically modified organisms get a bad rap for many reasons, but we’ve actually been genetically altering what we eat since the dawn of human history.
“For 10,000 years, we have altered the genetic makeup of our crops,” explains UC Davis plant pathology professor Pamela Ronald.
“Today virtually everything we eat is produced from seeds that we have genetically altered in one way or another.” (You can read more about Ronald’s thoughts on genetically engineered food here.)
Right now her focus is on rice. It’s one of our basic crops, and without it we would struggle to feed much of the world.
With climate change, we’re seeing an increase in flooding in places like India and Bangladesh, which makes it harder to grow this important food staple.
So Ronald and her lab have developed a flood-tolerant strain of rice. Known as Sub1a or “scuba rice,” it is now grown by millions of farmers in South Asia.
Today is National Food Day, a day dedicated to hunger awareness. But as we focus on food insecurity, we need to talk more about how global warming will make the problem worse.
As our climate continues to heat up, it will have huge impacts on what foods we are able to grow. Will our crops be able to survive droughts and floods? The University of California leads six labs that are working to develop climate-resilient crops, including chickpea, cowpea and millet.
Find out what other scientists are doing to improve our food.
Now that we know the (proposed) names of the four new elements, here’s an updated graphic with more information on each! High-res image/PDF: http://wp.me/p4aPLT-1Eg
In the system AR Scorpii, a rapidly spinning white dwarf star accelerates electrons to almost the speed of light. These high-energy particles release blasts of radiation that lash the companion red dwarf star and cause the entire system to pulse dramatically every 1.97 minutes, with radiation ranging from the ultraviolet to radio.
The star system AR Scorpii, or AR Sco for short, lies in the constellation of Scorpius, 380 light-years from Earth. It comprises a rapidly spinning white dwarf, the size of Earth but containing 200,000 times more mass, and a cool red dwarf companion one third the mass of the Sun, orbiting one another every 3.6 hours in a cosmic dance as regular as clockwork.
Read more at: cosmosmagazine / astronomynow
photos of sakurajima, the most active volcano in japan, by takehito miyatake and martin rietze. volcanic storms can rival the intensity of massive supercell thunderstorms, but the source of the charge responsible for this phenomenon remains hotly debated.
in the kind of storm clouds that generate conventional lightning, ice particles and soft hail collide, building up positive and negative charges, respectively. they separate into layers, and the charge builds up until the electric field is high enough to trigger lightning.
but the specific mechanism by which particles of differing charges are separated in the ash cloud is still unknown. lightning has been observed between the eruption plume and the volcano right at the start of an eruption, suggesting that there are processes that occur inside the volcano to lead to charge separation.
volcanic lightning could yield clues about the earth’s geological past, and could answer questions about the beginning of life on our planet. volcanic lightning could have been the essential spark that converted water, hydrogen, ammonia, and methane molecules present on a primeval earth into amino acids, the building blocks of life.
People who are blind from birth will gesture when they speak. I always like pointing out this fact when I teach classes on gesture, because it gives us an interesting perspective on how we learn and use gestures. Until now I’ve mostly cited a 1998 paper from Jana Iverson and Susan Goldin-Meadow that analysed the gestures and speech of young blind people. Not only do blind people gesture, but the frequency and types of gestures they use do not appear to differ greatly from how sighted people gesture. If people learn to gesture without ever seeing a gesture (and, most likely, without ever being shown one), then there must be something about learning a language that means you get gestures as a bonus.
Blind people will even gesture when talking to other blind people, and sighted people will gesture when speaking on the phone - so we know that people don’t only gesture when they speak to someone who can see their gestures.
Earlier this year a new paper came out that adds to this story. Şeyda Özçalışkan, Ché Lucero and Susan Goldin-Meadow looked at the gestures of blind speakers of Turkish and English, to see if the *way* they gestured differed from sighted speakers of those languages. Some of the sighted speakers were blindfolded, and others were left able to see their conversation partner.
Turkish and English were chosen because it has already been established that speakers of those languages consistently gesture differently when talking about videos of items moving. English speakers are more likely to show the manner (e.g. ‘rolling’ or ‘bouncing’) and trajectory (e.g. ‘left to right’, ‘downwards’) together in one gesture, while Turkish speakers show these features as two separate gestures. This reflects the fact that English ‘roll down’ is one verbal clause, while in Turkish the equivalent would be yuvarlanarak iniyor, which translates as two verbs: ‘rolling descending’.
Since we know that blind people do gesture, Özçalışkan’s team wanted to figure out if they gestured like other speakers of their language. Did the blind Turkish speakers separate the manner and trajectory of their gestures, like their verbs? Did English speakers combine them? Of course, the standard methodology of showing videos wouldn’t work with blind participants, so the researchers built three-dimensional models of events for people to feel before they discussed them.
The results showed that blind Turkish speakers gesture like their sighted counterparts, and the same for English speakers. All Turkish speakers gestured significantly differently from all English speakers, regardless of sightedness. This means that these particular gestural patterns are something that’s deeply linked to the grammatical properties of a language, and not something that we learn from looking at other speakers.
References
Jana M. Iverson & Susan Goldin-Meadow. 1998. Why people gesture when they speak. Nature, 396(6708), 228.
Şeyda Özçalışkan, Ché Lucero & Susan Goldin-Meadow. 2016. Is seeing gesture necessary to gesture like a native speaker? Psychological Science, 27(5), 737–747.
Aslı Özyürek & Sotaro Kita. 1999. Expressing manner and path in English and Turkish: Differences in speech, gesture, and conceptualization. In Twenty-First Annual Conference of the Cognitive Science Society (pp. 507–512). Erlbaum.
From retina to cortex: An unexpected division of labor
Neurons in our brain do a remarkable job of translating sensory information into reliable representations of our world that are critical to effectively guide our behavior. The parts of the brain that are responsible for vision have long been center stage for scientists’ efforts to understand the rules that neural circuits use to encode sensory information. Years of research have led to a fairly detailed picture of the initial steps of this visual process, carried out in the retina, and how information from this stage is transmitted to the visual part of the cerebral cortex, a thin sheet of neurons that forms the outer surface of the brain. We have also learned much about the way that neurons represent visual information in visual cortex, as well as how different this representation is from the information initially supplied by the retina. Scientists are now working to understand the set of rules—the neural blueprint—that explains how these representations of visual information in the visual cortex are constructed from the information provided by the retina.

Using the latest functional imaging techniques, scientists at the Max Planck Florida Institute for Neuroscience (MPFI) have recently discovered a surprisingly simple rule that explains how neural circuits combine information supplied by different types of cells in the retina to build a coherent, information-rich representation of our visual world.
Vision begins with the spatial pattern of light and dark that falls on the retinal surface. One important function performed by the neural circuits in the visual cortex is the preservation of the orderly spatial relationships of light versus dark that exist on the retinal surface. These neural circuits form an orderly map of visual space where each point on the surface of the cortex contains a column of neurons that each respond to a small region of visual space— and adjacent columns respond to adjacent regions of visual space. But these cortical circuits do more than build a map of visual space: individual neurons within these columns each respond selectively to the specific orientation of edges in their region of visual space; some neurons respond preferentially to vertical edges, some to horizontal edges, and others to angles in between. This property is also mapped in a columnar fashion where all neurons in a radial column have the same orientation preference, and adjacent columns prefer slightly different orientations.
Things would be easy if all the cortex had to do was build a map of visual space: a simple one to one mapping of points on the retinal surface to columns in the cortex would be all that was necessary. But building a map of orientation that coexists with the map of visual space is a much greater challenge. This is because the neurons of the retina do not distinguish orientation in the first step of vision. Instead, information on the orientation of edges must be constructed by neural circuits in the visual cortex. This is done using information supplied from two distinct types of retinal cells: those that respond to increases in light (ON-cells) and those that respond to decreases in light (OFF-cells). Adding to the complexity, orientation selectivity depends on having individual cortical neurons receive their ON and OFF signals from non-overlapping regions of visual space, and the spatial arrangement of these regions determines the orientation preference of the cell. Cortical neurons that prefer vertical edge orientations have ON and OFF responsive regions that are displaced horizontally in visual space, those that prefer horizontal edge orientations have their ON and OFF regions displaced vertically in visual space, and this systematic relationship holds for all other edge orientations.
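The relationship between subregion displacement and orientation preference can be illustrated with a minimal sketch. Everything here is hypothetical and idealized (a 21×21 patch of visual space, Gaussian ON/OFF subregions, hard light/dark edges), not the MPFI model itself, but it shows why a cell with horizontally displaced ON and OFF regions responds best to vertical edges:

```python
import numpy as np

def gaussian(size, center, sigma=2.0):
    """2-D Gaussian bump centered at (x, y) = center."""
    y, x = np.mgrid[:size, :size]
    return np.exp(-((x - center[0])**2 + (y - center[1])**2) / (2 * sigma**2))

def receptive_field(size=21, displacement=3):
    """A model cortical cell: an ON subregion (+) and an OFF subregion (-)
    displaced horizontally from one another in visual space."""
    c = size // 2
    on = gaussian(size, (c + displacement, c))
    off = gaussian(size, (c - displacement, c))
    return on - off

def edge_stimulus(size, angle_deg):
    """A light/dark edge through the center; angle_deg is the direction of
    the luminance gradient, so 0 degrees means a vertical edge (light on
    the right, dark on the left)."""
    y, x = np.mgrid[:size, :size]
    c = size // 2
    theta = np.deg2rad(angle_deg)
    d = (x - c) * np.cos(theta) + (y - c) * np.sin(theta)
    return np.sign(d)

# Response = overlap between the receptive field and each edge stimulus.
rf = receptive_field()
responses = {a: float(np.sum(rf * edge_stimulus(21, a)))
             for a in range(0, 180, 15)}
best = max(responses, key=responses.get)
print(best)  # 0: the horizontally displaced ON/OFF cell prefers vertical edges
```

Displacing the same two subregions vertically instead (swap the x and y offsets in `receptive_field`) flips the preference to horizontal edges, which is the systematic relationship the paragraph above describes.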
So cortical circuits face a paradox: How do they take the spatial information from the retina and distort it to create an orderly map of orientation selectivity, while at the same time preserving fine retinal spatial information in order to generate an orderly map of visual space? Nature’s solution might best be called ‘divide and conquer’. By using imaging technologies that allow visualization of the ON and OFF response regions of hundreds of individual cortical neurons, Kuo-Sheng Lee and Sharon Huang in David Fitzpatrick’s lab at MPFI have discovered that fine scale retinal spatial information is preserved by the OFF response regions of cortical neurons, while the ON response regions exhibit systematic spatial displacements that are necessary to build an orderly map of edge orientation. Preserving the detailed spatial information from the retina in the OFF response regions is consistent with evidence that dark elements of natural scenes convey more fine scale information than the light elements, and that OFF retinal neurons have properties that allow them to better extract this information. In addition, Lee et al. show that this OFF-anchored cortical architecture enables emergence of an additional orderly map of absolute spatial phase—a property that hasn’t received much attention from neuroscientists, but computer vision research has shown contains a wealth of information about the visual scene that can be used to efficiently encode spatial patterns, motion, and depth.
While these are important new insights into how visual information is transformed from retina to cortical representations, they pose a host of new questions about the network of synaptic connections that performs this transformation, and the developmental mechanisms that construct it, questions that the Fitzpatrick Lab continues to explore.