X-ray imaging, PET scans, CT scans, and MRIs are imaging techniques used to capture images of the inside of the body. 🩻
X-ray
— detects bone fractures, certain tumors and other abnormal masses, pneumonia, some types of injuries, calcifications, foreign objects, and dental problems.
MRA
— Magnetic Resonance Angiography uses a powerful magnetic field, radio frequency waves, and a computer to evaluate blood vessels and help identify abnormalities.
MRI
— Magnetic Resonance Imaging uses a magnetic field and radio waves to take pictures inside the body.
It is especially helpful for imaging soft tissue, such as organs and muscles, which does not show up well on X-ray examinations.
PET scan
— Positron Emission Tomography may be used to evaluate organs and/or tissues for the presence of disease or other conditions.
PET may also be used to evaluate the function of organs, such as the heart or brain.
The most common use of PET is in the detection of cancer and the evaluation of cancer treatment.
CT scan
— Computed Tomography is used to identify disease or injury within various regions of the body.
For example, CT has become a useful screening tool for detecting possible tumors or lesions within the abdomen.
A CT scan of the heart may be ordered when various types of heart disease or abnormalities are suspected.
🎞️: World of Medics
ProtoSnap, developed by Cornell and Tel Aviv universities, aligns prototype signs to photographed clay tablets to decode thousands of years of Mesopotamian writing.
Cornell University researchers report that scholars can now use artificial intelligence to “identify and copy over cuneiform characters from photos of tablets,” greatly easing the reading of these intricate scripts.
The new method, called ProtoSnap, effectively “snaps” a skeletal template of a cuneiform sign onto the image of a tablet, aligning the prototype to the strokes actually impressed in the clay.
By fitting each character’s prototype to its real-world variation, the system can produce an accurate copy of any sign and even reproduce entire tablets.
"Cuneiform, like Egyptian hieroglyphs, is one of the oldest known writing systems and contains over 1,000 unique symbols.
Its characters change shape dramatically across different eras, cultures and even individual scribes so that even the same character… looks different across time,” Cornell computer scientist Hadar Averbuch-Elor explains.
This extreme variability has long made automated reading of cuneiform a very challenging problem.
The ProtoSnap technique addresses this by using a generative AI model known as a diffusion model.
It compares each pixel of a photographed tablet character to a reference prototype sign, calculating deep-feature similarities.
Once the correspondences are found, the AI aligns the prototype skeleton to the tablet’s marking and “snaps” it into place so that the template matches the actual strokes.
In effect, the system corrects for differences in writing style or tablet wear by deforming the ideal prototype to fit the real inscription.
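To make that correspondence-and-fit step concrete, here is a minimal NumPy sketch. It assumes per-pixel deep feature maps for the prototype sign and the tablet crop have already been extracted (ProtoSnap derives such features from a diffusion model, which is omitted here), and it stands in a simple least-squares affine fit for the actual deformation the system computes; all function names are illustrative.

```python
# Minimal sketch: match prototype skeleton points to a tablet photo by
# deep-feature similarity, then "snap" the skeleton with a least-squares
# affine fit. Feature extraction (a diffusion model in ProtoSnap) is assumed
# done elsewhere; the names and the affine model are illustrative assumptions.
import numpy as np

def similarity_map(feats: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query feature (C,) and a (H, W, C) map."""
    norms = np.linalg.norm(feats, axis=-1) * np.linalg.norm(query)
    return (feats @ query) / np.maximum(norms, 1e-8)

def match_points(proto_feats, tablet_feats, points):
    """For each prototype skeleton point (y, x), find its best tablet match."""
    matches = []
    for y, x in points:
        sim = similarity_map(tablet_feats, proto_feats[y, x])
        matches.append(np.unravel_index(np.argmax(sim), sim.shape))
    return np.array(matches, dtype=float)

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 2D affine transform mapping src (N, 2) onto dst (N, 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) homogeneous coords
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) transform
    return T

def snap(proto_feats, tablet_feats, skeleton):
    """Deform the ideal prototype skeleton to fit the real inscription."""
    anchors = skeleton[::4]                        # subsampled control points
    matched = match_points(proto_feats, tablet_feats, anchors)
    T = fit_affine(anchors.astype(float), matched)
    return np.hstack([skeleton, np.ones((len(skeleton), 1))]) @ T
```

A real implementation would need a richer deformation model (an affine fit cannot capture per-stroke warping), but the pipeline of features, correspondences, and fit mirrors the description above.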
Crucially, the corrected (or “snapped”) character images can then be used to train other AI tools.
The researchers used these aligned signs to train optical-character-recognition models that turn tablet photos into machine-readable text.
They found the models trained on ProtoSnap data performed much better than previous approaches at recognizing cuneiform signs, especially the rare ones or those with highly varied forms.
In practical terms, this means the AI can read and copy symbols that earlier methods often missed.
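As a sketch of what that downstream training could look like, the following PyTorch snippet fits a small sign classifier on aligned crops. The architecture, crop size, class count, and the stand-in random batch are all assumptions for illustration, not the researchers’ actual models or data.

```python
# Sketch: train a small classifier on "snapped" (aligned) sign crops.
# The architecture, 64x64 grayscale crops, and ~1,000-sign label space are
# assumptions; the stand-in batch is random noise, where a real pipeline
# would load ProtoSnap-aligned crops paired with their sign labels.
import torch
import torch.nn as nn

NUM_SIGNS = 1000  # cuneiform contains over 1,000 unique symbols

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, NUM_SIGNS),  # 64x64 input pooled twice -> 16x16
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(crops: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of aligned sign crops."""
    optimizer.zero_grad()
    loss = loss_fn(model(crops), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

batch = torch.randn(8, 1, 64, 64)           # stand-in for aligned crops
labels = torch.randint(0, NUM_SIGNS, (8,))  # stand-in sign labels
print(train_step(batch, labels))
```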
This advance could save scholars enormous amounts of time.
Traditionally, experts painstakingly hand-copy each cuneiform sign on a tablet.
The AI method can automate that process, freeing specialists to focus on interpretation.
It also enables large-scale comparisons of handwriting across time and place, something too laborious to do by hand.
As Tel Aviv University archaeologist Yoram Cohen says, the goal is to “increase the ancient sources available to us by tenfold,” allowing big-data analysis of how ancient societies lived – from their religion and economy to their laws and social life.
The research was led by Hadar Averbuch-Elor of Cornell Tech and carried out jointly with colleagues at Tel Aviv University.
Graduate student Rachel Mikulinsky, a co-first author, will present the work – titled “ProtoSnap: Prototype Alignment for Cuneiform Signs” – at the International Conference on Learning Representations (ICLR) in April.
In all, roughly 500,000 cuneiform tablets are stored in museums worldwide, but only a small fraction have ever been translated and published.
By giving AI a way to automatically interpret the vast trove of tablet images, the ProtoSnap method could unlock centuries of untapped knowledge about the ancient world.
17-year-old Addison Bethea was in 5-foot-deep water near Grassy Island in Florida when a shark suddenly bit her on the leg. As Addison struggled in the water, her brother Rhett Willingham leapt into action and dived in after her. Fighting to save his sister’s life, Rhett managed to free her leg from the shark’s grip before dragging her to safety on his boat. Addison was then airlifted around 80 miles to Tallahassee Memorial Hospital, where she underwent emergency surgery to amputate her leg. If it weren’t for her brother’s quick thinking and brave actions, Addison could have lost her life that day.
This is how the James Webb Space Telescope works
44 years ago, Mount St. Helens erupted
Rotating Moon from NASA’s Lunar Reconnaissance Orbiter (LRO)