Wednesday, 13 January 2021

Have a look at SoFi - The Robotic Fish: The Next Revolution in Handling Underwater Climate Change

Just wondering what magic it could work on underwater life. During the pandemic we’ve seen coral reefs majorly affected by all the pollutants in the water.

Coral reef resilience is key to supporting the underwater cities threatened by climate change

The world is figuring out how to move forward in the face of the COVID-19 pandemic by finding newer ways to support economic development, animal and human well-being, and ecosystem integrity. As the priority in many parts of the world is to stay home, safe and healthy, work continues to address the ongoing crisis of nature loss, which also threatens long-term health and prosperity. In fact, nature, now more than ever, is sending warning signals calling for our attention.

One such warning is from the Great Barrier Reef along Australia's northeast coast. In March 2020, the area suffered its third mass coral bleaching event in five years due to increasingly warm temperatures recorded in February 2020.

There have also been reports of widespread bleaching occurring in East Africa in the first quarter of the year.

Coral reef scientists predict that bleaching events will be more frequent, more widespread and more severe. For instance, the 2017 Coral Bleaching Futures report by the United Nations Environment Programme (UNEP) predicted, “Increasingly coral bleaching would be among the greatest threats to coral reefs due to climate change.” Annual Severe Bleaching (ASB) is projected to occur within this century for 99 per cent of the world’s coral reefs. The average projected year of ASB is 2043, the report adds.

A report by the Intergovernmental Panel on Climate Change published in October 2018 warns that, even if we collectively manage to stabilize global surface temperatures to 1.5°C above pre-industrial levels, 70 to 90 per cent of coral reefs will be lost by the middle of this century.

Leticia Carvalho, Head of UNEP’s Marine and Fresh Water Branch says: “Scientists have been telling us for a while that coral bleaching events would become more frequent with anthropogenic climate change and warming oceans. Unfortunately, their worst predictions have come to pass. Mass coral bleaching events are like nature’s fire alarm, a stark reminder that climate change is happening and is already impacting our societies and global ecosystem”.

To date, the Great Barrier Reef—a UNESCO World Heritage Site, known for its vast mosaic patterns of reefs, islands and coral cays visible from space—has suffered six mass bleaching events due to warmer-than-normal ocean temperatures: in 1998, 2002, 2006, 2016, 2017 and now 2020. A statement by the Great Barrier Reef Marine Park Authority shows climate change remains the single greatest challenge to the Reef, which is home to the world’s largest collection of coral reefs, with 400 species of coral, 1,500 species of fish and 4,000 types of mollusk.

Bleaching occurs when corals—tiny animals that secrete calcium carbonate for protection—become stressed by factors such as warm water or pollution. As a result of the stress, they expel the microscopic symbiotic algae called zooxanthellae, which reside within their tissues. The corals then turn ghostly white; they become ‘bleached’. The zooxanthellae are the primary food source for corals. If the algae do not return to the corals soon enough (or if temperatures get warmer), the corals can die, as happened on the Great Barrier Reef in 2016 and 2017.

Corals have been observed to glow in luminescent colours—blue, yellow and purple— to protect themselves, like a sunscreen, during extreme ocean heat waves before they die. The phenomenon has sparked the Glowing Gone Campaign in which UNEP has partnered with The Ocean Agency, among other leading ocean conservation organizations.

“It is important to remember bleached corals are not dead corals — on mildly or moderately bleached reefs there is a good chance most bleached corals will recover and survive this event. Equally, on severely bleached reefs, there will be higher mortality of corals,” adds the statement from the lead management agency for the Reef. However, some pockets of the Reef remain unaffected.

Coral reefs, like underwater cities, support a quarter of all marine life - potentially up to 1 million species. They provide at least half a billion people with food security and livelihoods. They also protect coastlines from increasing damage by buffering shorelines against waves, storms and floods, preventing loss of life, property damage and erosion.

“Understanding the different responses of coral reefs to bleaching events is critical for managing coral reefs in a changing climate. Coral reefs are naturally resilient ecosystems, and have been observed to recover well after mortality events if they are given the chance to and other stressors are reduced. This means better water quality, reduced pollution, and sustainable fishing,” says Ms Carvalho.

Supporting coral reefs to be resilient entails reducing coral reef vulnerability to climate change and other stressors.

The bleaching event comes at a time when this year’s World Environment Day, celebrated on June 5, is focused on biodiversity. The day calls for increased awareness and understanding of what biodiversity is and how it provides the vital services that sustain all life on earth. Coral reefs have the highest biodiversity of any ecosystem on the planet—even more than a tropical rainforest.

Further, coral protection speaks to the UN Decade on Ecosystem Restoration (2021-2030) geared towards the restoration of degraded and destroyed ecosystems to fight the climate crisis and enhance food security, water supply and biodiversity.

One of the most promising answers to this challenge is SoFi, the robotic fish.



Using a miniaturized acoustic communication module, a diver can direct the fish by sending commands such as speed, turning angle, and dynamic vertical diving. Experimental results gathered from tests along coral reefs in the Pacific Ocean show that the robotic fish can successfully navigate around aquatic life at depths ranging from 0 to 18 meters. Furthermore, our robotic fish exhibits a lifelike undulating tail motion enabled by a soft robotic actuator design that can potentially facilitate a more natural integration into the ocean environment.
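To make the command link concrete, here is a minimal sketch of what a diver-to-fish command encoder could look like, assuming a compact binary frame sent over the acoustic modem. The field layout, value ranges, and header byte here are all hypothetical, not SoFi's actual protocol.

```python
# Hypothetical encoder for diver-to-fish acoustic commands.
# The frame layout, value ranges, and header byte are illustrative only.
import struct

def encode_command(speed: float, turn_deg: float, dive_rate: float) -> bytes:
    """Pack one command into a compact frame for the acoustic modem.

    speed     -- forward speed as a fraction of maximum, 0.0 to 1.0
    turn_deg  -- turning angle in degrees, -45 (left) to +45 (right)
    dive_rate -- vertical rate, -1.0 (dive) to +1.0 (ascend)
    """
    if not (0.0 <= speed <= 1.0 and -45.0 <= turn_deg <= 45.0
            and -1.0 <= dive_rate <= 1.0):
        raise ValueError("command out of range")
    payload = struct.pack("<B3f", 0xA5, speed, turn_deg, dive_rate)
    checksum = sum(payload) & 0xFF          # simple 1-byte integrity check
    return payload + bytes([checksum])

# Example: half speed, gentle left turn, slow dive.
print(encode_command(0.5, -10.0, -0.2).hex())
```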

We believe that our study advances beyond what is currently achievable using traditional thruster-based and tethered autonomous underwater vehicles, demonstrating methods that can be used in the future for studying the interactions of aquatic life and ocean dynamics.


What about SoFi?
It’s the most versatile bot of its kind, according to its creators at MIT’s computer science and AI lab CSAIL. It looks like a fish, moves like a fish, but it’s definitely a robot.

Versatile?
Yes. With its built-in cameras, scientists should be able to use SoFi to get close to the ocean’s inhabitants without spooking them — hopefully giving us greater insight into the lives of under-observed sea creatures.


Interesting!
Its housing is made from moulded and 3D-printed plastics, meaning it is cheap and fast to fabricate. It also has a built-in buoyancy tank of compressed air, which lets it adjust its depth and linger at specific points in the water column.
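As a toy illustration of how a buoyancy tank enables depth-holding (not SoFi's real controller), a simple proportional control loop could look like the sketch below; the gain and units are made up.

```python
# Toy proportional depth controller for a buoyancy tank (illustrative only).
def buoyancy_command(target_depth_m: float, current_depth_m: float,
                     gain: float = 0.5) -> float:
    """Return a tank adjustment in [-1, 1]:
    positive = vent air to sink, negative = add air to rise."""
    error = target_depth_m - current_depth_m
    return max(-1.0, min(1.0, gain * error))

# Hold at 10 m: too shallow -> vent air and sink, too deep -> add air and rise.
print(buoyancy_command(10.0, 8.0))    #  1.0  (well above target, sink hard)
print(buoyancy_command(10.0, 10.5))   # -0.25 (slightly deep, rise gently)
```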


Is SoFi fully automatic?
SoFi can swim semi-autonomously, and will keep going in a specific direction without oversight, but a handler can steer it left or right, up and down, using a modified SNES controller.

Tell me more.
SoFi’s propulsion system is its key asset. This is a powerful hydraulic actuator that pumps water in and out of a pair of internal chambers, moving its tail fin back and forth. Not only is this quieter than using propellers like a submarine, but it’s also less dangerous.
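A minimal sketch of that pumping pattern, assuming a simple sinusoidal drive alternating between the two chambers (illustrative, not the actual firmware):

```python
# Illustrative sinusoidal drive for a two-chamber hydraulic tail actuator.
import math

def chamber_flows(t: float, freq_hz: float = 1.4, amplitude: float = 1.0):
    """Return (left, right) pump commands in [0, amplitude].

    Pumping water into one chamber bends the tail toward that side;
    alternating the chambers sinusoidally produces a fish-like undulation.
    The beat frequency here is a made-up example value.
    """
    phase = math.sin(2 * math.pi * freq_hz * t)
    left = amplitude * max(0.0, phase)
    right = amplitude * max(0.0, -phase)
    return left, right

# Sample the first half-second of a tail-beat cycle.
for step in range(5):
    t = step * 0.1
    left, right = chamber_flows(t)
    print(f"t={t:.1f}s  left={left:.2f}  right={right:.2f}")
```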


What about its specifications? 
SoFi is 18.5 inches long from snout to tail and weighs about 3.5 pounds. It can dive 60 feet underwater and is powered by enough juice for about 40 minutes of exploration.

Go SoFi
Future versions of SoFi will also improve the fish's swimming and vision, and its creators say they are sketching out plans for SoFi 'swarms': schools of artificial fish set loose to monitor ocean health, perhaps recharged by solar-cell platforms floating on the water's surface.

Like miniature!






Monday, 11 January 2021

The untold story of David Hanson - a brief interview session

 A few thoughts on 

HUMANOID ROBOTS: RELATIONSHIPS, RIGHTS, RISKS AND RESPONSIBILITIES


David Hanson develops robots that are widely regarded as the world’s most human-like in appearance, in a lifelong quest to create true living, caring machines. To accomplish these goals, Hanson integrates figurative arts with cognitive science and robotics engineering, inventing novel skin materials and facial expression mechanisms and pursuing collaborative developments in AI, within humanoid artworks like Sophia the robot, which can engage people in naturalistic face-to-face conversations and currently serves in AI research, education, therapy, and other uses. Hanson worked as a Walt Disney Imagineer, as both a sculptor and a technical consultant in robotics, and later founded Hanson Robotics. As a researcher, Hanson has published dozens of papers in materials science, artificial intelligence, cognitive science, and robotics venues — including SPIE, IEEE, the International Journal of Cognitive Science, IROS, AAAI, AI Magazine and more. He wrote two books, including “Humanizing Robots”, and received several patents. Hanson has been featured in the New York Times, Popular Science, Scientific American, WIRED, BBC and CNN. He has also earned awards from NASA, NSF, Tech Titans’ Innovator of the Year, RISD, and the Cooper Hewitt Design Triennial, and co-received the 2005 AAAI first-place prize for open interaction of an AI system. Hanson holds a Ph.D. in Interactive Arts and Technology from the University of Texas at Dallas, and a BFA in Film/Animation/Video from the Rhode Island School of Design.

Let’s have a look at a brief talk with the creator of this amazing humanoid robot.

Do you think the Sophia humanoid is a PR stunt?

Disadvantages of AI?

We are all experiencing the benefits of AI, and scientists are speculating about an even brighter future for it. But though we are all fascinated by concepts like automated cars and personalized shopping, are there any threats from AI that we are unaware of?

Let’s find out…

What is AI? It is a branch of computer science. What is the main objective of AI-based machines? It is to mimic human activities so that they can augment our natural abilities or ease our lives. Some of the major AI-based software platforms include Google Cloud Machine Learning Engine, TensorFlow, Azure Machine Learning Studio, Cortana, IBM Watson, etc.

However, when you look at the brighter side of a thing, you should acknowledge that there is a darker side to it. Similarly, despite several advantages that AI offers, it also has some disadvantages that we can’t ignore. So, let us look at some of the major disadvantages of AI implementation.

1. HIGH COST OF IMPLEMENTATION

Setting up AI-based machines, computers, etc. entails huge costs given the complexity of the engineering that goes into building one. Further, the astronomical expense doesn’t stop there, as repair and maintenance can also run into thousands of dollars. Do you know how much it cost Apple to acquire its virtual assistant Siri? The acquisition reportedly cost a whopping $200 million. Likewise, the technology behind Amazon’s Alexa reportedly cost the company $26 million in a 2013 acquisition.

These AI-based software programs require frequent upgrades to cater to the requirements of a changing environment, as the machine needs to become smarter by the day. If the software suffers a severe breakdown, the process of recovering lost code and reinstalling the system can give you nightmares due to the huge time and cost involved.

2. CAN’T REPLACE HUMANS

It is beyond any doubt that machines perform much more efficiently than a human being. But even then, it is practically impossible to replace humans with AIs, at least in the near future, because you can’t build human intelligence into a machine; it is a gift of nature. So, no matter how smart a machine becomes, it can never replace a human.

We might get terrified at the idea of being replaced by machines, but honestly, it is still a far-fetched notion. Machines are rational but don’t have any emotions or moral values. They lack the ability to bond with human beings which is a critical attribute needed to manage a team of humans.

Yes, it is true that they can store a lot of data, but retrieving the right information from them is quite a cumbersome process, far less fluid than human intelligence.

3. DOESN’T IMPROVE WITH EXPERIENCE

One of the most amazing characteristics of human cognitive power is its ability to develop with age and experience. However, the same can’t be said of AIs; as machines, they don’t improve with experience, but rather start to wear out with time.

You need to understand one thing: machines can’t alter their responses to changing environments. That is the basic premise on which AIs are built – a repetitive kind of work where the input doesn’t change. So, whenever there is some change in the input, the AI needs to be re-assessed, re-trained and re-built.

Machines can’t judge what is right or wrong because they are incapable of understanding the concepts of ethics or legality. They are programmed for certain situations and so can’t make decisions when they encounter an unfamiliar situation they weren’t programmed for.

4. LACKS CREATIVITY

As already mentioned above – AIs are not built for creative pieces of work. So, it should be crystal clear by now that creativity or imagination is not the forte of the AIs. Although they can help you in designing and creating something special, they still can’t compete with the human brain. Their creativity is limited to the creative ability of the person who programs and commands them.

Human brains are characterized by immense sensitivity and high emotional quotient. To put it simply, AIs can become skilled machines but they can never acquire the abilities of the human brain. The reason is that skills can be learned and mastered, but abilities come naturally and can only be honed.

5. RISK OF UNEMPLOYMENT

With rapid development being made in the field of AI, the question that plagues our intuitive brain is: will AI replace humans? Honestly, I am not sure whether AIs will lead to higher unemployment or not. But AIs are likely to take over the majority of repetitive tasks, which are largely binary in nature and involve minimal subjectivity.

According to a study conducted by McKinsey Global Institute, intelligent agents and robots could replace ~30% of the world’s current human labor by the year 2030. The study further states that “automation will displace between 400 and 800 million jobs by 2030, requiring as many as 375 million people to switch job categories entirely”.

So, it can’t be ruled out that AIs will result in less human intervention, which may cause major disruption to employment standards. Nowadays, most organizations are implementing automation at some level in order to replace minimally qualified individuals with machines that can do the same work with higher efficiency. It is further evident from figures provided by International Data Corp., which state that worldwide AI spending was expected to hit $35.8 billion in 2019, and then to more than double to $79.2 billion by 2022.

CONCLUSION

In the above discussion, we have seen some of the major disadvantages of AI implementation. So, it can be concluded that, like any other invention, AI comes with its own set of problems. However, it is not overly optimistic to believe that these problems will be fixed with time, including the issue of unemployment, which can be addressed through human upskilling.

Monday, 14 December 2020

Just wondering: what if PTSD could be diagnosed by voice using AI?

 


PTSD Diagnosed through Voice Analysis Using AI

It sounds crazy, but I’ve actually encountered the same thing in real time: if you have smart devices, AI can work a lot of magic for you.

Scientists in the United States have developed an artificial intelligence (AI) tool, or classifier, that can diagnose posttraumatic stress disorder (PTSD) in veterans by analyzing their voices. Tests showed that the new tool could distinguish between individuals who did or did not have PTSD, with 89% accuracy. With further refinement the tool could potentially be used in a clinical setting to remotely diagnose PTSD, a condition for which the New York University (NYU) School of Medicine-led team acknowledges there is currently no objective test.

“Our findings suggest that speech-based characteristics can be used to diagnose this disease, and with further refinement and validation, may be employed in the clinic in the near future,” said senior study author Charles R. Marmar, MD, the Lucius N. Littauer professor and chair of the department of psychiatry at NYU School of Medicine.

Marmar’s team and colleagues at the Steven and Alexandra Cohen Veterans Center for the Study of Post-Traumatic Stress and Traumatic Brain Injury, and SRI International, describe the AI classifier in Depression and Anxiety, in a paper titled, “Speech-Based Markers for Posttraumatic Stress Disorder in U.S. Veterans.”

More than 70% of adults worldwide experience a traumatic event at some point in their lives, and in some countries up to 12% may suffer from PTSD. The condition leads to severe distress when faced with reminders of the traumatic event. PTSD may also be associated with relationship problems, lower academic achievement, substance abuse, and unemployment, the authors wrote.

Diagnosing PTSD remains “challenging,” they continued, and is commonly based on self-reported assessments or interviews with clinicians. The current gold standard for PTSD diagnosis, the Clinician Administered PTSD Scale (CAPS), is based on a lengthy, structured clinical interview, but it is not ideal, and some patients find it too distressing to discuss past traumatic events and their symptoms. “For these reasons, there is an imperative to develop objective measures for screening and diagnosing psychiatric disorders,” the authors wrote.

One sphere of research is looking for biological markers of PTSD, such as changes to neural structures and function, along with genomic and immune-function markers. While progress is being made, there are drawbacks and, as the authors commented, “… problems in accuracy, cost, and patient burden preclude routine use in clinical practice.”

Speech-based techniques offer an alternative and “attractive” potential approach for diagnosing different psychiatric disorders, they continued. Speech can be measured at low cost, remotely, and non-invasively. “Clinicians have long observed that individuals suffering from psychiatric disorders display changes in speech and routinely use impressions of voice quality as an element of the mental status examination …” Features of how we speak, such as whether the voice sounds “pressured,” may indicate conditions such as bipolar disorder, while characteristics including “monotone,” “lifeless,” and “metallic” may indicate depression.

While recent techniques developed for automating speech analysis have demonstrated encouraging specificity and sensitivity for some indications, there is relatively little known about changes in speech associated with PTSD. “Speech is an attractive candidate for use in an automated diagnostic system, perhaps as part of a future PTSD smartphone app, because it can be measured cheaply, remotely, and non-intrusively,” commented lead author Adam Brown, PhD, adjunct assistant professor in the department of psychiatry at NYU School of Medicine.

For their study, the Marmar team applied the random forest statistical/machine learning approach, which can learn how to classify individuals based on examples, to CAPS interviews with U.S. military veterans. The team recorded standard CAPS interviews with 53 Iraq and Afghanistan veterans with military-service-related PTSD and another 78 veterans without PTSD. Individuals with potentially confounding diagnoses, such as a history of substance abuse, psychiatric disorders including bipolar disorder, major depressive disorder (MDD) or non-PTSD-related depression, recent exposure to traumatic events, or suicidal ideation or attempts, had been excluded. The recordings were then fed into voice software from SRI International, which generated a total of 40,526 different speech-based features captured in short speech segments.
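To make the modelling step concrete, here is a minimal sketch of a random-forest classifier over per-recording speech features, in the spirit of the study. The feature matrix (random placeholder data), the feature count, and the hyperparameters are all assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal random-forest sketch for speech-based PTSD classification.
# The feature matrix below is random placeholder data; in the study,
# tens of thousands of speech features were extracted per recording.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder cohort: 53 PTSD and 78 control recordings, 200 features each.
X = rng.normal(size=(131, 200))       # hypothetical speech-feature matrix
y = np.array([1] * 53 + [0] * 78)     # 1 = PTSD, 0 = control

clf = RandomForestClassifier(n_estimators=500, random_state=0)

# Cross-validated accuracy; the published classifier reported ~89%.
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.3f}")

# Fit on all data and rank the most informative features, analogous to
# identifying voice markers such as tonality or speech rate.
clf.fit(X, y)
top10 = np.argsort(clf.feature_importances_)[::-1][:10]
print("top feature indices:", top10)
```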

The AI program analyzed the results to search for patterns of specific voice features that were linked with PTSD. These included less clear speech and a lifeless, metallic tone, which are characteristics that have been reported anecdotally as useful for PTSD diagnosis. “The software analyzes words—in combination with frequency, rhythm, tone, and articulatory characteristics of speech—to infer the state of the speaker, including emotion, sentiment, cognition, health, mental health, and communication quality,” explained Dimitra Vergyri, director of SRI International’s Speech Technology and Research (STAR) Laboratory. The results suggested that the probability of PTSD was higher for markers including “slower, more monotonous speech, less change in tonality, and less activation,” the team wrote.

When used to analyze speech the resulting classifier demonstrated an overall correct classification rate of 89.1%. While the authors acknowledge that their study had a number of limitations, they suggest that the panel of voice markers could be further developed into a clinical tool. “ … we believe that our panel of voice markers represents a rich, multidimensional set of features which with further validation holds promise for developing an objective, low cost, non-invasive, and, given the ubiquity of smart phones, widely accessible tool for assessing PTSD in veteran, military, and civilian contexts,” they wrote.

“The speech analysis technology used in the current study on PTSD detection falls into the range of capabilities included in our speech analytics platform called SenSay Analytics™,” Vergyri noted. “The technology has been involved in a series of industry applications visible in startups like Oto, Ambit, and Decoded Health.”

The team plans to continue to train the AI voice tool using additional data, with the ultimate goal of generating a classifier that can be used in a clinical setting.

Friday, 11 December 2020

Smooth interaction between humans and robots

 According to a new study by Tampere University in Finland, making eye contact with a robot may have the same effect on people as eye contact with another person. The results predict that interaction between humans and humanoid robots will be surprisingly smooth.

With the rapid progress in robotics, it is anticipated that people will increasingly interact with so-called social robots in the future. Despite the artificiality of robots, people seem to react to them socially and ascribe human attributes to them. For instance, people may perceive different qualities -- such as knowledgeability, sociability, and likeability -- in robots based on how they look and/or behave.

Previous surveys have been able to shed light on people's perceptions of social robots and their characteristics, but the very central question of what kind of automatic reactions social robots evoke in us humans has remained unanswered. Does interacting with a robot cause similar reactions as interacting with another human?

Researchers at Tampere University investigated the matter by studying the physiological reactions that eye contact with a social robot evokes. Eye contact was chosen as the topic of the study for two major reasons. First, previous results have shown that certain emotional and attention-related physiological responses are stronger when people see the gaze of another person directed to them compared to seeing their averted gaze. Second, directing the gaze either towards or away from another person is a type of behaviour related to normal interaction that even current social robots are quite naturally capable of.

In the study, the research participants were face to face with another person or a humanoid robot. The person and the robot either looked directly at the participant, making eye contact, or averted their gaze. At the same time, the researchers measured the participants' skin conductance, which reflects the activity of the autonomic nervous system; the electrical activity of the cheek muscle, which reflects positive affective reactions; and heart rate deceleration, which indicates the orienting of attention.
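For a concrete picture of how such condition differences might be tested, here is a hypothetical within-subject comparison; the participant count, response values, and the choice of a paired t-test are illustrative assumptions, not the study's actual analysis.

```python
# Hypothetical within-subject comparison of a physiological response
# (e.g. skin-conductance amplitude) for eye contact vs. averted gaze.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30  # illustrative number of participants

# Placeholder per-participant mean responses in each gaze condition.
eye_contact = rng.normal(loc=0.55, scale=0.15, size=n)
averted = rng.normal(loc=0.40, scale=0.15, size=n)

# Paired t-test: is the response reliably stronger for direct gaze?
t_stat, p_val = stats.ttest_rel(eye_contact, averted)
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_val:.4f}")
print(f"mean difference = {np.mean(eye_contact - averted):.3f}")
```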

The results showed that all the above-mentioned physiological reactions were stronger in the case of eye contact compared to averted gaze when shared with both another person and a humanoid robot. Eye contact with the robot and another human focused the participants' attention, raised their level of arousal and elicited a positive emotional response.

"Our results indicate that the non-linguistic, interaction-regulating cues of social robots can affect humans in the same way as similar cues presented by other people. Interestingly, we respond to signals that have evolved over the course of evolution to regulate human interaction even when these signals are transmitted by robots. Such evidence allows us to anticipate that as robot technology develops, our interaction with the social robots of the future may be surprisingly seamless," says doctoral researcher Helena Kiilavuori.

"The results were quite astonishing for us, too, because our previous results have shown that eye contact only elicits the reactions we perceived in this study when the participants know that another person is actually seeing them. For example, in a video conference, eye contact with the person on the screen does not cause these reactions if the participant knows that his or her own camera is off, and the other person is unable to see him or her. The fact that eye contact with a robot produces such reactions indicates that even though we know the robot is a lifeless machine, we treat it instinctively as if it could see us. As if it had a mind which looked at us," says Professor of Psychology Jari Hietanen, director of the project.

Credit: Tampere University


Journal Reference:

  1. Helena Kiilavuori, Veikko Sariola, Mikko J. Peltola, Jari K. Hietanen. Making eye contact with a robot: Psychophysiological responses to eye contact with a human and with a humanoid robot. Biological Psychology.

Heart Disease can be cured using Artificial Intelligence

According to a new study published in the European Heart Journal, selfies could be used to detect heart disease. The study shows that a deep learning computer algorithm can detect coronary artery disease from photographs of a person's face. The algorithm needs to be developed further by testing it on larger numbers of people from different ethnic backgrounds, but the technique could become a tool for spotting possible heart disease in the general population, or in people at risk of developing heart disease in the near future. It is a first step toward tools based on deep machine learning that could help assess heart disease risk, whether in clinics or for patients who want to perform their own screening.

The research was led by Professor Zhe Zheng of the National Center for Cardiovascular Diseases in China. He says his ultimate goal is to develop self-reported applications for communities at higher risk of heart problems; however, the algorithm first needs to be refined by testing it on different communities, populations and ethnicities.

Certain facial features are known to be associated with heart disease risk, such as greying and thinning hair, wrinkles, small yellow deposits of cholesterol under the skin (usually around the eyelids), and arcus corneae. The study was carried out on patients admitted to different hospitals in China who were undergoing imaging procedures to investigate coronary artery disease. Trained nurses took four facial photos of each patient with a digital camera, and patients were also interviewed about their diets and lifestyles. Radiologists then reviewed the patients' angiograms, which were used to train and validate the algorithm.

The test was successful and outperformed existing methods of predicting heart disease risk in this population. Because facial analysis works from photographs alone, it can be run at large scale and could be useful in underfunded areas with limited medical facilities. The test still has low sensitivity, however, so before implementation on a large scale it should be tested on a larger population. Fear of the misuse of genetic and personal information by AI has also limited its use in medical healthcare and the sciences.
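As a rough sketch of the kind of deep learning model such a study would involve (not the authors' actual architecture), a transfer-learning image classifier for "CAD vs. no CAD" could be set up as below; the backbone choice, hyperparameters, and dummy data are all placeholders.

```python
# Hypothetical transfer-learning sketch for a binary face-photo classifier
# (coronary artery disease vs. none). Not the study's actual model.
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone (downloads weights on first
# use) and replace the final layer with a single logit for CAD / no CAD.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch standing in for preprocessed facial photographs:
# 8 images, 3 channels, 224x224 pixels, with binary disease labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.4f}")
```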

Further, not only can these heart diseases be detected, but they could potentially be cured as well. Artificial intelligence can use the concepts of machine learning and frequencies to help cure such diseases if properly handled, but it will all take a little more effort at either end.
