Wednesday 31 May 2023

Artificial intelligence in neurology: promising research and proven applications

Artificial intelligence (AI) is redrawing the healthcare landscape, and neurology is no exception to this growing trend. With good reason: AI can offer many benefits, both in neurological research and in diagnosis and therapeutic interventions. But what is AI, and how can neurologists use it to their advantage? How can AI-based algorithms help neurologists and their patients? In this article, we explore five areas of neurology where AI is currently playing a role in improving healthcare.

Let’s start with some AI basics

AI is the field of computer science focused on simulating intelligent human behavior and the computational processes of the brain. Other terms, such as "machine learning" and "deep learning", are sometimes used as synonyms of AI, yet they are in fact subfields within it.

Artificial intelligence covers all programmed systems that can perform tasks which usually require human intelligence. Machine learning and deep learning pursue the same goal but use specific methods, with machine learning being a subfield of AI and deep learning a subfield of machine learning.

AI is probably the most talked-about technology today, but it means different things to different people. Usually, people think of algorithms that can learn patterns from large datasets so that they can later recognize those patterns in new data. These can be both machine learning and deep learning algorithms. If you are curious to learn more about how these algorithms work, check out our AI introduction.
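As a concrete, deliberately simplified illustration of that idea, the Python sketch below trains a model on a synthetic labelled dataset and then applies the learned pattern to data it has never seen. The dataset and model choice are arbitrary stand-ins for any real clinical data and algorithm.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labelled clinical data: feature vectors plus a known outcome.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                      # learn the pattern from known cases
predictions = model.predict(X_test)              # recognize it in new, unseen cases
print(f"Accuracy on unseen data: {accuracy_score(y_test, predictions):.2f}")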

AI in neurology: oncology

The field of neuro-oncology faces many challenges for which AI can offer support. This can be seen in the example of brain tumor assessment and diagnosis. Researchers from Heidelberg University Hospital and the German Cancer Research Center trained a machine learning algorithm using approximately 500 magnetic resonance imaging (MRI) scans of patients suffering from brain tumors. Using volumetric tumor segmentation as a ground truth, the resulting algorithm was able to detect and localize brain tumors automatically on the MRI scans.1 Such techniques can be of great value for accurate diagnosis, and can also assist in tracking tumor therapy response in a repeatable and objective way.
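To illustrate how such segmentation outputs are typically used, here is a minimal Python sketch (hypothetical masks and voxel size; not the Heidelberg group's actual pipeline) that scores a predicted tumour mask against a ground-truth mask with a Dice coefficient and derives a volume change between two time points, the kind of repeatable, objective measurement mentioned above.

import numpy as np

def dice_score(pred_mask, gt_mask):
    # Overlap between predicted and ground-truth segmentations (1.0 = perfect agreement).
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + gt_mask.sum() + 1e-8)

def tumour_volume_ml(mask, voxel_volume_mm3):
    # Convert a binary segmentation mask into a volume in millilitres.
    return mask.sum() * voxel_volume_mm3 / 1000.0

# Hypothetical masks for a baseline and a follow-up scan (True = tumour voxel).
baseline = np.zeros((64, 64, 32), dtype=bool)
baseline[20:40, 20:40, 10:20] = True
followup = np.zeros((64, 64, 32), dtype=bool)
followup[22:38, 22:38, 11:19] = True

print(f"Dice vs. baseline: {dice_score(followup, baseline):.2f}")
print(f"Volume change: {tumour_volume_ml(followup, 1.0) - tumour_volume_ml(baseline, 1.0):+.1f} ml")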

Figure: AI in neuro-oncology: data with ground truth are used for algorithm training. After training, the algorithm will be able to provide an output (answering a specific question, such as "does this brain MRI show a tumor and, if so, where?") based on new data.

Another application of AI in neuro-oncology is outcome prediction. Emblem et al. developed a machine learning algorithm (a support vector machine) using MRI-based blood volume distribution data to predict preoperative glioma survival. Results showed that the algorithm was able to estimate survival between 6 months and 3 years, with even higher accuracy than experts in aggressive glioma cases.2
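A support vector machine of the kind Emblem et al. describe can be sketched in a few lines. The Python example below uses simulated blood-volume features and survival labels, so it illustrates the general approach rather than reproducing their model.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Simulated per-patient features summarizing an MRI blood-volume distribution
# (e.g. histogram bins of relative cerebral blood volume); label 1 = longer survival.
X = rng.normal(size=(120, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=120) > 0).astype(int)

svm = SVC(kernel="rbf", C=1.0)
print(f"Cross-validated accuracy: {cross_val_score(svm, X, y, cv=5).mean():.2f}")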

AI in neurology: neurodegeneration

AI has also been helpful in the field of neurodegeneration, which includes Parkinson's disease, Alzheimer's disease, and amyotrophic lateral sclerosis (ALS). A nice overview of the application of AI to kinematic data for the diagnosis and assessment of Parkinson's disease was written by Belić et al. in 2019. Several (types of) algorithms were compared for different applications (diagnosis vs assessment) across different body sites. The authors discussed a wide range of methods and corresponding results, yet these covered only part of the possible applications of AI in Parkinson's disease, as the review did not include the evaluation of MRI data, speech recordings, or AI support in the search for improved medication, to name a few.3

Similar research around AI and Alzheimer's disease has been performed in the past. AI has been applied to speech recordings4, cognitive test scores5, and, perhaps most extensively, to neuroimaging, which offers a wide range of AI-based image analysis options. For example, through analyses of brain MRIs, AI algorithms can automatically measure biomarkers of Alzheimer's disease, including the rate of brain atrophy.6 If large datasets are available, AI can compare a patient's MRI results to a normative value, and thus provide information on whether a patient's biomarker is within a normal range or deserves extra attention.7

Different approaches in AI can also be deployed to analyze data other than medical images. Bakkar et al. used sophisticated AI-based text search to sift through published literature on RNA-binding proteins that were previously linked to ALS. The researchers determined semantic features typical of ALS-related proteins. Subsequently, this information was used to scan other literature for proteins that are described using the same semantic features. Through this effort, the researchers singled out five RNA-binding proteins previously unlinked to ALS (for those interested in the details: the proteins hnRNPU, Syncrip, RBMS3, Caprin-1, and NUPL2 all showed significant alterations in ALS patients compared to controls).8
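Bakkar et al. used IBM Watson for this literature mining; a heavily simplified way to convey the underlying idea (representing how proteins are described in text and ranking candidates by similarity to known ALS-linked proteins) is sketched below in Python, with TF-IDF vectors and made-up one-line summaries in place of real mined abstracts.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up one-line "literature summaries"; a real pipeline would mine abstracts per protein.
known_als_protein = ["forms cytoplasmic aggregates in motor neurons and disrupts RNA splicing"]
candidates = {
    "candidate_A": "aggregates in the cytoplasm of neurons and alters RNA splicing and transport",
    "candidate_B": "regulates cell-cycle progression and proliferation in epithelial tissue",
}

texts = known_als_protein + list(candidates.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
similarity = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()

# Candidates whose descriptions most resemble known ALS-linked proteins rank highest.
for name, score in sorted(zip(candidates, similarity), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")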

AI in neurology: neurovascular

Many applications of AI in the field of neurovascular disorders have been researched, and some have even been brought to market as software packages over the past few years. For instance, AI can aid the diagnosis of stroke in various ways. AI-based assessments of CT scans of patients with hemorrhagic stroke allow, for example, the automated segmentation of hemorrhagic lesions9 or the automated detection and quantification of hemorrhagic expansion.10 Additionally, AI has shown the ability to detect early warning signs of ischemia on CT images and to measure the extent of early ischemic changes through automated determination of the ASPECT score.11 One last example of AI's value in stroke diagnosis is the estimation of the time of ischemic stroke onset. These are just a few examples, and many more are available; if you are curious, a quick PubMed search will give you plenty of ideas.
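To give a flavour of the hemorrhage examples, here is a minimal Python sketch (invented voxel size and purely illustrative expansion thresholds) showing how segmented masks from a baseline and a follow-up CT can be turned into volumes and a hematoma-expansion flag.

import numpy as np

def hemorrhage_volume_ml(mask, voxel_volume_mm3):
    # Volume of a segmented hemorrhage in millilitres.
    return mask.sum() * voxel_volume_mm3 / 1000.0

def expansion_detected(baseline_ml, followup_ml, abs_ml=6.0, rel=0.33):
    # Flag expansion if growth exceeds an absolute or relative threshold (illustrative values).
    growth = followup_ml - baseline_ml
    return growth > abs_ml or growth > rel * baseline_ml

# Hypothetical segmentation masks from two CT scans (True = hemorrhage voxel).
rng = np.random.default_rng(0)
baseline_mask = rng.random((128, 128, 40)) > 0.995
followup_mask = rng.random((128, 128, 40)) > 0.990

v0 = hemorrhage_volume_ml(baseline_mask, 0.45 * 0.45 * 5.0)
v1 = hemorrhage_volume_ml(followup_mask, 0.45 * 0.45 * 5.0)
print(f"{v0:.1f} ml -> {v1:.1f} ml, expansion: {expansion_detected(v0, v1)}")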

Outside the neurovascular imaging domain, similar success can be expected in other parts of the neurovascular field. For example, Labovitz and colleagues, from the Montefiore Medical Center and AiCure in New York City, demonstrated that AI can aid in patient monitoring, particularly with regard to medication. In their study, an AI platform was installed on the patient's mobile device, where the AI algorithms were able to visually identify the patient, the medication, and the confirmed ingestion of that medication. The study also found that patients receiving this real-time monitoring had increased adherence to their prescription regimen.12 Thus, using a patient's smartphone by installing AI-based apps that monitor behavior and treatment adherence can help improve therapeutic outcomes. This may seem like a simple, even mundane, example, but the difference it can make in people's lives could be substantial.

AI in neurology: traumatic brain injury

Applying AI to traumatic brain injury (TBI) has also been fruitfully researched over the past years. The main goal of intensive care in TBI is to mitigate secondary brain injury following the initial insult by controlling factors such as intracranial and cerebral perfusion pressure. In 2019, researchers from Finland published a study on the use of AI in TBI. They developed an algorithm that could predict 30-day mortality for TBI patients, trained on only a few main variables (intracranial pressure, cerebral perfusion pressure, and mean arterial pressure). A secondary algorithm that also utilized the motor and eye components of the Glasgow Coma Scale was trained as well. Remarkably, with these few variables, the AI was able to accurately discriminate between survivors and non-survivors over 80% of the time.13

Figure: AI in traumatic brain injury: Raj et al. trained two algorithms which, combined, can predict 30-day mortality with 80 percent accuracy in patients who suffered a TBI.
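The sketch below gives a sense of how such a model can be built and evaluated. It uses Python with simulated monitoring summaries and a logistic regression, so it illustrates the general recipe rather than the actual algorithms trained by Raj et al.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
# Simulated per-patient summaries: mean ICP, CPP and MAP (mmHg) plus GCS motor and eye scores.
X = np.column_stack([
    rng.normal(15, 8, n),      # intracranial pressure
    rng.normal(70, 10, n),     # cerebral perfusion pressure
    rng.normal(85, 12, n),     # mean arterial pressure
    rng.integers(1, 7, n),     # GCS motor component
    rng.integers(1, 5, n),     # GCS eye component
])
y = (X[:, 0] > 20).astype(int)  # toy stand-in for the 30-day mortality label

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC: {auc:.2f}")   # discrimination between survivors and non-survivors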

AI in neurology: spinal cord injury

Finally, applications of AI in neurology can be seen in cases of spinal cord injury. A study published in the journal Nature by researchers from the Battelle Memorial Institute and Ohio State University recently showed that intracortically recorded signals can be linked in real time to muscle activation to restore movement in a paralyzed human. The researchers used a machine learning algorithm to decode the neuronal activity and control the activation of forearm muscles through a custom-built electrical stimulation system.14 This enabled a 24-year-old man who had previously lost function in his hands and arms due to spinal cord injury to once again grasp, manipulate, and release objects. This not only represents an important contribution to neuroprosthetic technology but also has direct implications for people living with paralysis worldwide.
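A minimal sketch of the decoding step (Python, with simulated spike counts, made-up grasp labels and a plain logistic-regression decoder rather than the actual system built by Bouton et al.) might look like this; each decoded intention would then be mapped to a pre-calibrated stimulation pattern for the forearm.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Simulated binned spike counts (trials x recording channels) and the intended movement per trial.
spike_counts = rng.poisson(lam=5, size=(200, 96)).astype(float)
intended = rng.integers(0, 3, size=200)      # 0 = rest, 1 = cylindrical grasp, 2 = pinch

decoder = LogisticRegression(max_iter=2000)
decoder.fit(spike_counts[:150], intended[:150])      # calibrate the decoder on known attempts

# Map each decoded intention to a (hypothetical) pre-calibrated stimulation pattern.
stim_patterns = {0: "no stimulation", 1: "forearm flexor pattern", 2: "thumb-index pattern"}
decoded = decoder.predict(spike_counts[150:151])[0]
print(stim_patterns[int(decoded)])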

Using AI in neurology in the clinic

We have outlined several ways in which AI has shown promise in the field of neurology. Many algorithms still need to go through the important translation process from a research tool to clinically approved software, while some applications have already obtained a nod from the regulatory bodies for use in the clinic. All in all, AI in neurology proves to be a promising field of research, and, if you are interested in bringing artificial intelligence software to your neurology practice, there are quite a few options available on the market. Just make sure to select software that has been approved for clinical use.

Bibliography

  1. Kickingereder, P. et al. Automated quantitative tumour response assessment of MRI in neuro-oncology with artificial neural networks: a multicentre, retrospective study. Lancet Oncol. (2019). doi:10.1016/S1470-2045(19)30098-1
  2. Emblem, K. E. et al. A generic support vector machine model for preoperative glioma survival associations. Radiology 275 (2015).
  3. Belić, M. et al. Artificial intelligence for assisting diagnostics and assessment of Parkinson's disease — a review. Clin. Neurol. Neurosurg. 184 (2019).
  4. Chien, Y., Hong, S., Cheah, W., Yao, L. & Chang, Y. An automatic assessment system for Alzheimer's disease based on speech using feature sequence generator and recurrent neural network. Sci. Rep. (2019). doi:10.1038/s41598-019-56020-x
  5. Hughes, O. Using AI assessment to tackle dementia in ultra-early stages. Digital Health (2019). Available at: https://www.digitalhealth.net/2019/09/using-ai-assessment-tackle-dementia-ultra-early-stages/.
  6. Blamire, A. M. MR approaches in neurodegenerative disorders. Prog. Nucl. Magn. Reson. Spectrosc. 108, 1–16 (2018).
  7. Vinke, E. J. et al. Normative brain volumetry derived from different reference populations: impact on single-subject diagnostic assessment in dementia. Neurobiol. Aging 84 (2019).
  8. Bakkar, N. et al. Artificial intelligence in neurodegenerative disease research: use of IBM Watson to identify additional RNA-binding proteins altered in amyotrophic lateral sclerosis. Acta Neuropathol. 135, 227–247 (2018).
  9. Scherer, M. et al. Development and validation of an automatic segmentation algorithm for quantification of intracerebral hemorrhage. Stroke (2016). doi:10.1161/STROKEAHA.116.013779
  10. Nagamine, M. et al. Abstract WP395: Detection of hemorrhagic expansion with AI. Stroke (2020).
  11. Takahashi, N. et al. Computer-aided detection scheme for identification of hypoattenuation of acute stroke in unenhanced CT. Radiol. Phys. Technol. 5, 98–104 (2012).
  12. Labovitz, D. L., Shafner, L., Gil, M. R., Virmani, D. & Hanina, A. Using artificial intelligence to reduce the risk of nonadherence in patients on anticoagulation therapy. Stroke 48, 1416–1419 (2017).
  13. Raj, R., Luostarinen, T., Pursiainen, E., Posti, J. P. & Takala, R. S. K. Machine learning-based dynamic mortality prediction after traumatic brain injury. Sci. Rep. (2019). doi:10.1038/s41598-019-53889-6
  14. Bouton, C. E. et al. Restoring cortical control of functional movement in a human with quadriplegia. Nature 533, 247–250 (2016). doi:10.1038/nature17435


Ding Dong Merrily on AI: The British Neuroscience Association’s Christmas Symposium Explores the Future of Neuroscience and AI

 

Moving past idiotic AI

Opening the day with his talk, Shake your Foundations: the future of neuroscience in a world where AI is less rubbish, Prof. Christopher Summerfield, from the University of Oxford, looked at the idiotic, ludic and pragmatic stages of AI. We are moving from the idiotic phase, where virtual assistants are usually unreliable and AI-controlled cars crash into random objects they fail to notice, to the ludic phase, where some AI tools are actually quite handy. Summerfield highlighted DALL-E, an AI that converts text prompts into images, and a language generator called Gopher that can answer complicated ethical questions with eerily natural responses.


What could these advances in AI mean for neuroscience? Summerfield suggested that they invite researchers to consider the limits of current neuroscience practice that could be enhanced by AI in the future.


Integration of neuroscience subfields could be enabled by AI, said Summerfield. Currently, he said “People who study language don’t care about vision. People who study vision don’t care about memory.” AI systems don’t work properly if only one distinct subfield is considered and Summerfield suggested that, as we learn more about how to create a more complete AI, similar advances will be seen in our study of the biological brain.


Another element of AI that could drag neuroscience into the future is the level of grounding required for it to succeed. Currently, AI models are provided with contextual training data before they can learn associations, whereas the human brain learns from scratch. What makes it possible for a volunteer in a psychologist’s experiment to be told to do something, and then just do it? To create more natural AIs, this is a problem that neuroscience will have to solve in the biological brain first.

Better decisions in healthcare using AI

The University of Oxford's Prof. Mihaela van der Schaar looked at how we can use machine learning to empower human learning in her talk, Quantitative Epistemology: a new human-machine partnership. Van der Schaar's talk discussed practical applications of machine learning in healthcare, teaching clinicians through a process called meta-learning. This is where, said van der Schaar, "learners become aware of and increasingly in control of habits of perception, inquiry, learning and growth."


This approach provides a potential look at how AI might supplement the future of healthcare, by advising clinicians on how they make decisions and how to avoid potential error when undertaking certain practices. Van der Schaar gave an insight into how AI models can be set up to make these continuous improvements. In healthcare, which, at least in the UK, is slow to adopt new technology, van der Schaar’s talk offered a tantalizing glimpse of what a truly digital approach to healthcare could achieve.


Dovetailing nicely from van der Schaar’s talk was Imperial College London professor Aldo Faisal’s presentation, entitled AI and Neuroscience – the Virtuous Cycle. Faisal looked at systems where humans and AI interact and how they can be classified. Whereas in van der Schaar’s clinical decision support systems, humans remain responsible for the final decision and AIs merely advise, in an AI-augmented prosthetic, for example, the roles are reversed. A user can suggest a course of action, such as “pick up this glass”, by sending nerve impulses and the AI can then find a response that addresses this suggestion, by, for example, directing a prosthetic hand to move in a certain way. Faisal then went into detail on how these paradigms can inform real-world learning tasks, such as motion-tracked subjects learning to play pool.


One fascinating study involved a balance board task, where a human subject could tilt the board in one axis, while an AI controlled another, meaning that the two had to collaborate to succeed. After time, the strategies learned by the AI could be “copied” between certain subjects, suggesting the human learning component was similar. But for other subjects, this wasn’t possible.


Faisal suggested this hinted at complexities in how different individuals learn that could inform behavioral neuroscience, AI systems and future devices, like neuroprostheses, where the two must play nicely together.


The afternoon’s session featured presentations that touched on the complexities of the human and animal brain. The University of Sheffield’s Professor Eleni Vasilaki explained how mushroom bodies, regions of the fly brain that play roles in learning and memory, can provide insight into sparse reservoir computing. Thomas Nowotny, professor of informatics at the University of Sussex, reviewed a process called asynchrony, where neurons activate at slightly different times in response to certain stimuli. Nowotny explained how this enables relatively simple systems like the bee brain to perform incredible feats of communication and navigation using only a few thousand neurons.

Do AIs have minds? 

Wrapping up the day's presentations was a lecture that showed an uncanny future for social AIs, delivered by Henry Shevlin, a senior researcher at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge.


Shevlin reviewed the theory of mind, which enables us to understand what other people might be thinking by, in effect, modeling their thoughts and emotions. Do AIs have minds in the same way that we do? Shevlin reviewed a series of AIs that have been out in the world, acting as humans, here in 2021.


One such AI, OpenAI's language model GPT-3, spent a week posting on the internet forum site Reddit, chatting with human Redditors and racking up hundreds of comments. Chatbots like Replika personalize themselves to individual users, creating pseudo-relationships that feel as real as human connections (at least to some users). Current systems, said Shevlin, are excellent at fooling humans, but they have no "mental" depth and are, in effect, extremely proficient versions of the predictive text systems our phones use.


While the rapid advance of some of these systems might feel dizzying or unsettling, AI and neuroscience are likely to be wedded together in future research. So much can be learned from pairing these fields, and true advances will be gained not from retreating from complex AI theories but by embracing them. At the end of Summerfield's talk, he summed up the idea that AIs are "black boxes" that we don't fully understand as "lazy". If we treat deep networks and other AI systems as neurobiological theories instead, the next decade could see unprecedented advances for both neuroscience and AI.

Alto Neuroscience uses AI to create biomarkers for personalised mental health drugs

Alto Neuroscience is a clinical-stage biopharmaceutical startup that uses its artificial intelligence (AI)-enabled platform to measure brain biomarkers, including electroencephalogram (EEG) activity, behavioural patterns, wearable data, genetics and other factors, to drive targeted drug development in mental health.

Using GlobalData's Drugs Database and a series of searches relating to AI, more than 490 unique drugs from over 140 different companies have been identified as discovered using AI-based technologies. These companies include specialist AI vendors that provide services to the pharma industry, as well as small biotechs that have developed drug pipelines using in-house AI technology, such as Alto Neuroscience. Figure 1 shows the top companies by number of drugs developed using AI-based technologies, with AI vendor Recursion Pharmaceuticals leading with 82 drugs in various stages of early development, many of these candidates inactive.

However, as shown in Figure 2, Alto Neuroscience has the most drugs in clinical development, including three in Phase II studies for major depressive disorder (MDD) and post-traumatic stress disorder (PTSD) (ALTO-100, ALTO-202 and ALTO-300), and eight in Phase I for unspecified psychiatric disorders.

In December 2021, Cerebral, a mental health startup, and Alto Neuroscience announced a partnership to launch a decentralised clinical study in precision psychiatry aiming to boost drug development and treatments for patients with mental disorders. The companies will conduct Phase II studies to evaluate Alto Neuroscience's drug candidates for psychiatric conditions and enrol around 300 participants from Cerebral's member network. Participants will then undergo in-home evaluations measuring their brain activity, sleep, activity patterns and genetics, in addition to clinical outcomes such as depression and PTSD. Using Alto Neuroscience's analytical approach to predicting patient outcomes, the companies will assess whether certain biomarkers are the best way to identify the patients most likely to benefit from a given drug candidate. The Phase II study is expected to complete in 2022. This collaboration aims to address the disruption and limited treatment options that the psychiatric sector has faced for decades, and to provide patients with easier access to care.

OpenAI for Laravel: Using ChatGPT/GPT-3/3.5 with PHP and Laravel

OpenAI PHP for Laravel is a powerful tool for developers looking to use the OpenAI ChatGPT / GPT-3 API within their Laravel projects. This library allows you to easily interact with the OpenAI API and use its various features within your Laravel application.

Installing OpenAI for Laravel

To install OpenAI PHP for Laravel, you will first need to make sure you have PHP 8.1 or higher installed on your system. Once you have the required version of PHP, you can use the Composer package manager to install the OpenAI PHP for Laravel library by running the following command:

composer require openai-php/laravel

After installing the library, you will need to publish the configuration file by running the following artisan command:

php artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"

This will create a config/openai.php configuration file in your project that you can modify to your needs using environment variables. You will need to provide your OpenAI API key in this configuration file (typically by setting the OPENAI_API_KEY variable in your .env file) in order to use the library.

Using OpenAI for Laravel

Once you have installed and configured OpenAI PHP for Laravel, you can use the OpenAI facade to access the various features of the OpenAI ChatGPT / GPT-3 API within your Laravel application. For example, you can use the completions method to generate completions for a given prompt, as shown in the following example:

use OpenAI\Laravel\Facades\OpenAI;

$result = OpenAI::completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => 'PHP is',
]);

echo $result['choices'][0]['text']; // an open-source, widely-used, server-side scripting language.

Conclusion

OpenAI PHP for Laravel is a powerful tool for developers looking to use the OpenAI API within their Laravel projects. With its easy installation and integration, it makes it simple to use the various features of the OpenAI API within your Laravel application.

Tuesday 30 May 2023

Neuroscience: The next AI frontier

The demand for neurology in the U.S. and across the globe is exceptionally high, with significant discrepancies between the growing need for neurological services and the actual supply of neurologists: according to expert estimates, by 2025 demand for neurology will have grown by approximately 15-20%, but forecasters project that the supply of neurologists will grow by only 7% by mid-decade.

While the recruitment and training of more doctors is one solution, embracing the latest technologies and advancements in artificial intelligence offers a more likely and more effective long-term solution.

Fittingly, our understanding of neuroscience has played a pivotal role in advances in AI, and vice versa. From developments in object recognition technologies as seen in self-driving vehicles, to the faster, more accurate detection of breast cancer through Google AI, the study of how the human mind works has proven a rich source of inspiration for many algorithmic approaches and developments in AI and Machine Learning (ML).

So too have advancements in AI helped further our understanding of the human mind, accelerating progress in neuroscience. Among the disease types considered in the AI literature, the nervous system is second only to neoplasms. While initially limited mainly to research applications, the implementation of AI in neurologic clinical healthcare has increased steadily, with numerous examples of success in neurodiagnostic testing. These include neuroimaging, which has empowered clinicians to make earlier and more accurate diagnoses in stroke victims, the development of new treatments, such as in autism, and aids to prognostication, for example in epilepsy.

Clinicians readily acknowledge the mutual benefit of AI in medicine: 89% of healthcare professionals have expressed that AI has enhanced their work and the systems they use, according to research by KPMG. This enhancement comes from treating AI as a resource rather than as an attempt to replace healthcare professionals. Indeed, clinicians working in collaboration with AI significantly outperform those who work independently, and the reverse holds as well: AI is far more effective when it is guided by the expertise and discernment of a human professional than when the clinician is removed from the process.

Reaping the benefits of AI in neurology

The benefits of partnering physicians and AI tools in healthcare will be far reaching, with the most notable, overarching and long-term benefit being the ability to utilize AI to offload some of the burden of neurological diagnosis and treatment to our colleagues in primary or emergency care by enhancing their abilities. By the end of 2021, the healthcare AI market is expected to exceed $6 billion as more and more companies invest in and develop AI to do just this.

What will this mean for the future of neurology? RapidAI is an exemplary company on the front lines of neurological innovation. RapidAI’s AI and data-driven technology is empowering clinicians to make faster, more accurate diagnostic and treatment decisions for stroke patients. The company has developed a neuroimaging stroke software platform powered by AI to provide faster analysis of data in treating stroke patients. Utilizing this platform combined with telemedicine, I personally have been able to improve my ability to provide pediatric stroke coverage while also enhancing emergent care for pediatric stroke victims who would have otherwise required transportation (typically by airlift) to our children’s hospital.

Advances in neuroscience have also blurred the boundaries between psychiatry and neurology. Neurologists’ ability to refer patients to psychiatrists will also face growing challenges in the years to come, with similar shortages of psychiatrists projected by 2025.

Psychiatric comorbidities, particularly depression, worsen during the course of major neurologic conditions. Population-based studies suggest that one in every three patients who develop stroke, epilepsy, migraines, or Parkinson’s disease will develop depression. Between 30% and 50% of patients with dementia suffer from depression, and between 27% and 54% of patients with multiple sclerosis (MS) have had an episode of major depressive disorder.

Pear Therapeutics offers another promising look at how medical professionals working alongside AI can make meaningful progress in treating major neurological illnesses. The company is conducting a clinical trial in which multiple sclerosis patients suffering depressive symptoms are treated with a digital therapeutic product, developed in collaboration with Novartis.

At Cognoa, we are harnessing AI to equip pediatricians with the tools they need to accurately diagnose the majority of ASD patients through a prescriptive software as a medical device. The AI we employ is able to analyze thousands of different features indicative of ASD as well as draw on troves of data to make a rapid diagnosis and ensure early intervention occurs. We’re part of a rich fabric of companies applying AI in a targeted way to address major challenges in neuroscience. Aural Analytics is another company utilizing AI to propel the neurology field forward, by analyzing speech patterns to detect subtle changes in brain health.

The benefits of AI are wide-reaching across the neurology field. In a recent analysis published in the Journal of Neurology, neurologist Urvish K. Patel and his colleagues found a wide range of benefits to AI for neurological patients. AI and ML can enable efficient diagnosis and treatment of epilepsy, for instance, and can even spot early warning signs of autonomic instability, helping prevent sudden unexpected death in epilepsy (SUDEP). Additionally, AI algorithms can classify and predict the progression of dementia, guiding more intelligent treatment decisions.

Looking toward the future

For all the promise AI has shown as a neurological tool, it is still early days for the development of AI-based medical solutions. Further progress will hinge on the development of additional massive, high-quality datasets on which to train algorithms. Initiatives such as the BRAIN initiative, the Human Brain Project, the Human Connectome Project, and the National Institute of Mental Health’s (NIMH) Research Domain Criteria (RDoC) initiative are helping to address this need. Because human clinicians will be partnering with AI, securing as much input from doctors as possible will be critical to building sustainable, reliable models that maximize accuracy and effectiveness.

The role of regulation and regulators will also be crucial to the speed and depth of AI usage when it comes to aiding the neurology field. As AI started to be incorporated into medical devices, the FDA recognized that changes would need to be made to the regulatory process for software as a medical device, and it is adapting accordingly.

Last April, the FDA proposed a framework that describes its foundation for a potential approach to premarket review of AI- and ML-driven software modifications. More recently, it held a two-day public workshop on the Evolving Role of Artificial Intelligence in Radiological Imaging – all examples of the FDA proactively supporting the possibilities for AI to improve healthcare. The vision of CDRH Chief Medical Officer and Director for Pediatrics and Special Populations Vasum Peiris, M.D., M.P.H., to create a new nationwide network that can support innovation in pediatric device development, dubbed the System of Hospitals for Innovation in Pediatrics (SHIP), is another positive development.

This framework, combined with new regulations, will allow AI-based diagnostics and therapeutics to be utilized across these special populations, including women (non-pregnant and pregnant), children, and the elderly, who have a proportionally higher prevalence of neurological conditions.

As more patients require neurological services in the years to come, AI offers a pathway to addressing the projected shortage in neurologists – acting as a digital colleague to human professionals. And by alerting neurologists to signs of trouble, enabling clinicians to triage cases, and bringing unprecedented precision and accuracy to treatment and diagnostics, AI promises to alleviate clinical bottlenecks and make neurologists more effective. That makes AI-driven neurology a true win-win.

Natural and Artificial Intelligence: A brief introduction to the interplay between AI and neuroscience research

Neuroscience and artificial intelligence (AI) share a long history of collaboration. Advances in neuroscience, alongside huge leaps in computer processing power over the last few decades, have given rise to a new generation of in silico neural networks inspired by the architecture of the brain. These AI systems are now capable of many of the advanced perceptual and cognitive abilities of biological systems, including object recognition and decision making. Moreover, AI is now increasingly being employed as a tool for neuroscience research and is transforming our understanding of brain functions. In particular, deep learning has been used to model how convolutional layers and recurrent connections in the brain's cerebral cortex control important functions, including visual processing, memory, and motor control. Excitingly, the use of neuroscience-inspired AI also holds great promise for understanding how changes in brain networks result in psychopathologies, and could even be utilized in treatment regimes. Here we discuss recent advances in four areas in which the relationship between neuroscience and AI has led to major progress: (1) AI models of working memory, (2) AI visual processing, (3) AI analysis of big neuroscience datasets, and (4) computational psychiatry.

Classically, our definition of intelligence has largely been based upon the capabilities of advanced biological entities, most notably humans. Accordingly, research into artificial intelligence (AI) has primarily focused on the creation of machines that can perceive, learn, and reason, with the overarching objective of creating an artificial general intelligence (AGI) system that can emulate human intelligence, so-called Turing-powerful systems. Considering this aim, it is not surprising that scientists, mathematicians, and philosophers working on AI have taken inspiration from the mechanistic, structural, and functional properties of the brain.
Since at least the 1950s, attempts have been made to artificially model the information processing mechanisms of neurons. This primarily began with the development of perceptrons (Rosenblatt, 1958), a highly reductionist model of neuronal signaling, in which an individual node receiving weighted inputs could produce a binary output if the summation of inputs reached a threshold.
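To make the perceptron idea concrete, the following is a minimal Python sketch of a single Rosenblatt-style unit: a weighted sum of inputs compared against a threshold. The weights and threshold here are arbitrary, illustrative values.

import numpy as np

def perceptron(inputs, weights, threshold):
    # Rosenblatt-style unit: fire (1) if the weighted sum of inputs reaches the threshold.
    return int(np.dot(inputs, weights) >= threshold)

# Illustrative values only: three inputs, hand-picked weights, threshold of 0.5.
print(perceptron(np.array([1.0, 0.0, 1.0]), np.array([0.4, 0.9, 0.3]), 0.5))  # -> 1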
Coinciding with the emergence of the cognitive revolution in the 50s and 60s, and extending until at least the 90s, there was initially much pushback against the development of artificial neural networks within the AI and cognitive science communities (Fodor & Pylyshyn, 1988; Mandler, 2002; Minsky & Papert, 1969). However, by the late 1980s, the development of multilayer neural networks and the popularization of backpropagation had solved many of the limitations of early perceptrons, including their inability to solve non-linear classification problems such as learning a simple boolean XOR function (Rumelhart, Hinton, & Williams, 1986). Neural networks were now able to dynamically modify their own connections by calculating error functions of the network and communicating them back through the constituent layers, giving rise to a new generation of AI capable of intelligent skills including image and speech recognition (Bengio, 1993; LeCun, Bengio, & Hinton, 2015; LeCun et al., 1989). To date, backpropagation is still commonly used to train deep neural networks (Lillicrap, Santoro, Marris, Akerman, & Hinton, 2020; Richards et al., 2019), and has been combined with reinforcement learning methods to create advanced learning systems capable of matching or outperforming humans in strategy-based games including chess (Silver et al., 2018), Go (Silver et al., 2016, 2018), poker (Moravčík et al., 2017), and StarCraft II (Vinyals et al., 2019). Despite the neuroscience-inspired origins of AI, the biological plausibility of modern AI is questionable. Indeed, there is little evidence that backpropagation of error underlies the modification of synaptic connections between neurons (Crick, 1989; Grossberg, 1987), although recent theories have suggested that an approximation of backpropagation may exist in the brain (Lillicrap et al., 2020; Whittington & Bogacz, 2019).
While creating brain-like systems is clearly not necessary to achieve all goals of AI, as evidenced by the above-described accomplishments, a major advantage of biologically plausible AI is its usefulness for understanding and modeling information processing in the brain. Additionally, brain mechanisms can be thought of as an evolutionarily-validated template for intelligence, honed over millions of years for adaptability, speed, and energy efficiency. As such, increased integration of brain-inspired mechanisms may help to further improve the capabilities and efficiency of AI. These ideas have led to continued interest in the creation of neuroscience-inspired AI, and have further strengthened the partnership between AI and neuroscience research fields. In the last decade, several biologically plausible alternatives to backpropagation have been suggested, including predictive coding (Bastos et al., 2012; Millidge, Tschantz, & Buckley, 2020), feedback alignment (Lillicrap, Cownden, Tweed, & Akerman, 2016), equilibrium propagation (Scellier & Bengio, 2017), Hebbian-like learning rules (Krotov & Hopfield, 2019; Miconi, 2017), and zero-divergence inference learning (Salvatori, Song, Lukasiewicz, Bogacz, & Xu, 2021). Similarly, other recent efforts to bridge the gap between artificial and biological neural networks have led to the development of spiking neural networks capable of approximating stochastic potential-based communication between neurons (Pfeiffer & Pfeil, 2018), as well as attention-like mechanisms including transformer architectures (Vaswani et al., 2017).
The beneficial relationship between AI and neuroscience is reciprocal, and AI is now rapidly becoming an invaluable tool in neuroscience research. AI models designed to perform intelligence-based tasks are providing novel hypotheses for how the same processes are controlled within the brain. For example, work on distributional reinforcement learning in AI has recently resulted in the proposal of a new theory of dopaminergic signaling of probabilistic distributions (Dabney et al., 2020). Similarly, goal-driven deep learning models of visual processing have been used to estimate the organizational properties of the brain's visual system and accurately predict patterns of neural activity (Yamins & DiCarlo, 2016). Additionally, advances in deep learning algorithms and the processing power of computers now allow for high-throughput analysis of large-scale datasets, including that of whole-brain imaging in animals and humans, expediting the progress of neuroscience research (Thomas, Heekeren, Müller, & Samek, 2019; Todorov et al., 2020; Zhu et al., 2019). Deep learning models trained to decode neural imaging data can create accurate predictions of decision-making, action selection, and behavior, helping us to understand the functional role of neural activity, a key goal of cognitive neuroscience (Batty et al., 2019; Musall, Urai, Sussillo, & Churchland, 2019). Excitingly, machine learning and deep learning approaches are also now being applied to the emerging field of computational psychiatry to simulate normal and dysfunctional brain states, as well as to identify aberrant patterns of brain activity that could be used as robust classifiers for brain disorders (Cho, Yim, Choi, Ko, & Lee, 2019; Durstewitz, Koppe, & Meyer-Lindenberg, 2019; Koppe, Meyer-Lindenberg, & Durstewitz, 2021; Zhou et al., 2020).
In the last few years, several reviews have examined the long and complicated relationship between neuroscience and AI (see Hassabis, Kumaran, Summerfield, & Botvinick, 2017; Hasson, Nastase, & Goldstein, 2020; Kriegeskorte & Douglas, 2018; Richards et al., 2019; Ullman, 2019). Here, we aim to give a brief introduction into how the interplay between neuroscience and AI fields has stimulated progress in both areas, focusing on four important themes taken from talks presented at a symposium entitled "AI for neuroscience and neuromorphic technologies" at the 2020 International Symposium on Artificial Intelligence and Brain Science: (1) AI models of working memory, (2) AI visual processing, (3) AI analysis of neuroscience data, and (4) computational psychiatry. Specifically, we focus on how recent neuroscience-inspired approaches are resulting in AI that is increasingly brain-like and is not only able to achieve human-like feats of intelligence, but is also capable of decoding neural activity and accurately predicting behavior and the brain's mental contents. This includes spiking and recurrent neural network models of working memory inspired by the stochastic spiking properties of biological neurons and their sustained activation during memory retention, as well as neural network models of visual processing incorporating convolutional layers inspired by the architecture of the brain's visual ventral stream. Additionally, we discuss how AI is becoming an increasingly powerful tool for neuroscientists and clinicians, acting as diagnostic and even therapeutic aids, as well as potentially informing us about brain mechanisms, including information processing and memory.
2. Neuroscience-inspired artificial working memory
One of the major obstacles of creating brain-like AI systems has been the challenge of modeling working memory, an important component of intelligence. Today, most in silico systems utilize a form of working memory known as random-access memory (RAM) that acts as a cache for data required by the central processor and is separated from long-term memory storage in solid state or hard disk drives. However, this architecture differs considerably from the brain, where working and long-term memory appear to involve, at least partly, the same neural substrates, predominantly the neocortex (Baddeley, 2003; Blumenfeld & Ranganath, 2007; Rumelhart & McClelland, 1986; Shimamura, 1995) and hippocampus (Bird & Burgess, 2008; Eichenbaum, 2017). These findings suggest that within these regions, working memory is likely realized by specific brain mechanisms that allow for fast and short-term access to information. Working memory tasks performed in humans and non-human primates have indicated that elevated and persistent activity within cell assemblies of the prefrontal cortex, as well as other areas of the neocortex, hippocampus, and brainstem, may be critical for information retention within working memory (Boran et al., 2019; Christophel, Klink, Spitzer, Roelfsema, & Haynes, 2017; Fuster & Alexander, 1971; Goldman-Rakic, 1995; McFarland & Fuchs, 1992; Miller, Erickson, & Desimone, 1996; Watanabe & Niki, 1985). In response to these findings, several neural mechanisms have been proposed to account for this persistent activity (reviewed in Durstewitz, Seamans, & Sejnowski, 2000).
These include recurrent excitatory connectivity between networks of neurons (Hopfield, 1982; O'Reilly, Braver, & Cohen, 1999), cellular bistability, where the intrinsic properties of neurons can produce a continuously spiking state (Lisman, Fellous, & Wang, 1998; Marder, Abbott, Turrigiano, Liu, & Golowasch, 1996; O'Reilly et al., 1999), and synfire chains, where activity is maintained in synchronously firing feed-forward loops (Diesmann, Gewaltig, & Aertsen, 1999; Prut et al., 1998). Of these, the most widely researched have been models of persistent excitation in recurrently connected neural networks. These began with simple networks, such as recurrent attractor networks, where discrete working memories represent the activation of attractors, stable patterns of activity in networks of neurons reciprocally connected by strong synaptic weights formed by Hebbian learning (Amit, Bernacchia, & Yakovlev, 2003; Amit & Brunel, 1995; Durstewitz et al., 2000). Afferent input to these networks strong enough to reach a threshold will trigger recurrent excitation and induce a suprathreshold level of excitation that persists even when the stimulus is removed, maintaining the stimulus in working memory.
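A toy attractor network along these lines can be written in a few lines of Python. The sketch below is a minimal Hopfield-style network (one stored pattern, Hebbian weights, arbitrary sizes) showing how recurrent connections restore and then maintain a degraded input pattern.

import numpy as np

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=50)               # the stored "working memory" pattern
weights = np.outer(pattern, pattern).astype(float)   # Hebbian learning rule
np.fill_diagonal(weights, 0.0)

state = pattern.copy()
state[:10] *= -1                                     # a degraded / partial cue
for _ in range(5):                                   # recurrent updates drive the network to the attractor
    state = np.where(weights @ state >= 0, 1, -1)

print(bool(np.array_equal(state, pattern)))          # True: the memory is recovered and maintained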
Subsequent and more complex computational models have demonstrated recurrent networks connecting the cortex, basal ganglia, and thalamus to be capable of working memory maintenance, and to be able to explain patterns of neural activity observed in neurophysiological studies of working memory (Beiser & Houk, 1998; Botvinick & Plaut, 2006; Hazy, Frank, & O'Reilly, 2007; O'Reilly et al., 1999; Zipser, 1991). Inspired by the above-described biological and computational studies, artificial recurrent neural networks (RNNs) have been designed as a model of the recurrent connections between neurons within the brain's cerebral cortex. These RNNs have since been reported to be capable of performing a wide variety of cognitive tasks requiring working memory (Botvinick & Plaut, 2006; Mante, Sussillo, Shenoy, & Newsome, 2013; Rajan, Harvey, & Tank, 2016; Song, Yang, & Wang, 2016; Sussillo & Abbott, 2009; Yang, Joglekar, Song, Newsome, & Wang, 2019). More recently, researchers have been working on a new generation of spiking recurrent neural networks (SRNN), aiming to recreate the stochastic spiking properties of biological circuits and demonstrating similar performance in cognitive tasks to the above-described continuous-rate RNN models (Kim, Li, & Sejnowski, 2019; Xue, Halassa, & Chen, 2021; Yin, Corradi, & Bohté, 2020). These spiking networks not only aim to achieve greater energy efficiency, but also provide improved biological plausibility, offering advantages for modeling and potentially informing how working memory may be controlled in the brain (Diehl, Zarrella, Cassidy, Pedroni, & Neftci, 2016; Han, Sengupta, & Roy, 2016; Pfeiffer & Pfeil, 2018; Taherkhani et al., 2020). Indeed, in a recent study, an SRNN trained on a working memory task was revealed to show remarkably similar temporal properties to single neurons in the primate PFC (Kim & Sejnowski, 2021). Further analysis of the model uncovered the existence of a disinhibitory microcircuit that acts as a critical component for long neuronal timescales that have previously been implicated in working memory maintenance in real and simulated networks (Chaudhuri, Knoblauch, Gariel, Kennedy, & Wang, 2015; Wasmuht, Spaak, Buschman, Miller, & Stokes, 2018). The authors speculate that recurrent networks with similar inhibitory microcircuits may be a common feature of cortical regions requiring short-term memory maintenance, suggesting an interesting avenue of study for neuroscientists researching working memory mechanisms in the brain.
Finally, it is important to note that while there is clear biological evidence for spiking activity during memory retention periods in working memory tasks, the majority of studies reporting persistent activity during these periods calculated the averaged spiking activity across trials, potentially masking important intra-trial spiking dynamics (Lundqvist, Herman, & Miller, 2018). Interestingly, recent single-trial analyses of working memory tasks suggest that frontal cortex networks demonstrate sparse, transient coordinated bursts of spiking activity, rather than persistent activation (Bastos, Loonis, Kornblith, Lundqvist, & Miller, 2018; Lundqvist, Herman, Warden, Brincat, & Miller, 2018; Lundqvist et al., 2016). Such patterns of neural activity may be explained by models of transient spiking activity, such as the "synaptic attractor" model, where working memories are maintained by spike-induced Hebbian synaptic plasticity in between transient coordinated bursts of activity (Fiebig & Lansner, 2017; Huang & Wei, 2021; Lundqvist, Herman, & Miller, 2018; Mongillo, Barak, & Tsodyks, 2008; Sandberg, Tegnér, & Lansner, 2003). These models suggest that synaptic plasticity may allow working memory to be temporarily stored in an energy efficient manner that is also less susceptible to interference, while bursts of spiking may allow for fast reading of information when necessary (Huang & Wei, 2021; Lundqvist, Herman, & Miller, 2018). Further investigation of working memory in biological studies using single-trial analyses, as well as neuroscience-inspired AI models trained on working memory tasks, may help to elucidate precisely when and how these spiking and plasticity-based processes are utilized within the brain.
Here we have discussed how neuroscience findings over the last few decades inspired the creation of computational models of working memory in humans and non-human primates. These studies subsequently informed the creation of artificial neural networks designed to model the organization and function of brain networks, including the inclusion of recurrent connections between neurons and the introduction of spiking properties. The relationship between neuroscience and AI research has now come full circle, with recent SRNN models potentially informing about brain mechanisms underlying working memory (Kim & Sejnowski, 2021). In the next section we continue this examination of the benefits of the partnership between neuroscience and AI research, discussing how brain architectures have inspired the design of artificial visual processing models and how brain imaging data has been used to decode and inform how visual processing is controlled within the brain.
3. Decoding the brain's visual system
The challenge of creating artificial systems capable of emulating biological visual processing is formidable.
However, recent efforts to understand and reverse engineer the brain's ventral visual stream, a series of interconnected cortical nuclei responsible for hierarchically processing and encoding of images into explicit neural representations, have shown great promise in the creation of robust AI systems capable of decoding and interpreting human visual processing, as well as performing complex visual intelligence skills including image recognition (Federer, Xu, Fyshe, & Zylberberg, 2020; Verschae & Ruiz-del-Solar, 2015), motion detection (Manchanda & Sharma, 2016; Wu, McGinnity, Maguire, Cai, & Valderrama-Gonzalez, 2008), and object tracking (Luo et al., 2020; Soleimanitaleb, Keyvanrad, & Jafari, 2019; Zhang et al., 2021).
In an effort to understand and measure human visual perception, machine learning models, including support-vector networks, have been trained to decode stimulus-induced fMRI activity patterns in the human V1 cortical area, and were able to visually reconstruct the local contrast of presented and internal mental images (Kamitani & Tong, 2005; Miyawaki et al., 2008). Similarly, those trained to decode stimulus-induced activity in higher visual cortical areas were able to identify the semantic contents of dream imagery (Horikawa, Tamaki, Miyawaki, & Kamitani, 2013). These findings indicate that the visual features of both perceived and mental images are represented in the same neural substrates (lower and higher visual areas for low-level perceptual and high-level semantic features, respectively), supporting previous evidence from human PET imaging studies revealing mental imagery to activate the primary visual cortex, an area necessary for visual perception (Kosslyn et al., 1993; Kosslyn, Thompson, Kim, & Alpert, 1995). Additionally, these studies add to a growing literature demonstrating the utility of AI for decoding of brain imaging data for objective measurement of human visual experience (Kamitani & Tong, 2005; Nishimoto et al., 2011).
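The decoding approach in these studies can be caricatured as multi-voxel pattern classification. The Python sketch below uses simulated voxel responses and labels and a linear SVM (in place of the specific models used in the cited work) to show the basic recipe of learning to predict the presented stimulus from trial-by-trial activity patterns.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 160, 500
labels = rng.integers(0, 2, n_trials)                      # which of two stimuli was shown
voxel_preference = rng.normal(size=n_voxels)               # each voxel weakly "prefers" one stimulus
signal = np.where(labels[:, None] == 1, 0.3, -0.3) * voxel_preference
patterns = signal + rng.normal(size=(n_trials, n_voxels))  # add trial-to-trial noise

decoder = LinearSVC(C=0.01, max_iter=5000)
accuracy = cross_val_score(decoder, patterns, labels, cv=5).mean()
print(f"Decoding accuracy: {accuracy:.2f}")                # above chance (0.5) if patterns carry stimulus information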
Finally, beyond machine learning methods, the incorporation of a deep generator network (DGN) to a very deep convolutional neural network (CNN) image reconstruction method, allowing CNN hierarchical processing layers to be fully utilized in a manner similar to that of the human visual system, has recently been demonstrated to improve the quality of visual reconstructions of perceived or mental images compared with the same CNN without the DGN (Shen, Horikawa, Majima, & Kamitani, 2019).
Interestingly, neural networks trained to perform visual tasks have often been reported to acquire similar representations to regions of the brain's visual system required for the same tasks (Nonaka, Majima, Aoki, & Kamitani, 2020; Yamins & DiCarlo, 2016). CNNs incorporating hierarchical processing layers similar to that of the visual ventral stream and trained on image recognition tasks have been reported to be able to accurately predict neural responses in the inferior temporal (IT) cortex, the highest area of the ventral visual stream, of primates (Cadieu et al., 2014; Khaligh-Razavi & Kriegeskorte, 2014; Yamins et al., 2014). What is more, high-throughput computational evaluation of candidate CNN models revealed a strong correlation between a model's object recognition capability and its ability to predict IT cortex neural activity (Yamins et al., 2014). Accordingly, recent evidence revealed that the inclusion of components that closely predict the activity of the front-end of the visual stream (V1 area) improves the accuracy of CNNs by reducing their susceptibility to errors resulting from image perturbations, so-called white box adversarial attacks (Dapello et al., 2020).
While these studies appear to suggest the merit of "brain-like" AI systems for visual processing, until recently there has been no method to objectively measure how "brain-like" an AI visual processing model is. In response to this concern, two novel metrics, the Brain-Score (BS) (Schrimpf et al., 2020) and the brain hierarchy (BH) score (Nonaka et al., 2020), have been created to assess the functional similarity between AI models and the human visual system. Specifically, the BS measures the ability of models to predict brain activity and behavior, whereas the BH is designed to evaluate the hierarchical homology across layers/areas between neural networks and the brain (Nonaka et al., 2020; Schrimpf et al., 2020). Interestingly, while evaluation of several commonly used AI visual processing models found a positive correlation between the BS and the accuracy for image recognition (i.e., brain-like neural networks performed better), the opposite result was found when the BH was used (Nonaka et al., 2020; Schrimpf et al., 2020). Although these findings appear to contradict each other, more recently developed high-performance neural networks tended to have a lower BS, suggesting that AI vision may now be diverging from human vision (Schrimpf et al., 2020). Importantly, and particularly for the BS, it should be considered that while the ability of a model to predict brain activity may indicate its functional similarity, it does not necessarily mean that the model is emulating actual brain mechanisms. In fact, statisticians have long stressed the importance of the distinction between explanatory and predictive modeling (Shmueli, 2010). Thus, if we intend to use AI systems to model and possibly inform our understanding of visual processing in the brain, it is important that we continue to increase the structural and mechanistic correspondence of AI models to their counterpart systems within the brain, as well as strengthening the ability of metrics to measure such correspondence. Indeed, considering the known complexity of the brain's visual system, including the existence of multiple cell types (Gonchar, Wang, & Burkhalter, 2008; Pfeffer, Xue, He, Huang, & Scanziani, 2013) that are modulated by various neurotransmitters (Azimi et al., 2020; Noudoost & Moore, 2011), it is likely that comparatively simplistic artificial neural networks do not yet come close to fully modeling the myriad of processes contributing to biological visual processing.
Finally, in addition to their utility for image recognition, brain-inspired neural networks are now beginning to be applied to innovative and practical uses within the field of visual neuroscience research. One example of this is the recent use of an artificial neural network to design precise visual patterns that can be projected directly onto the retina of primates to accurately control the activity of individual or groups of ventral stream (V4 area) neurons (Bashivan, Kar, & DiCarlo, 2019). These findings indicate the potential of this method for non-invasive control of neural activity in the visual cortex, creating a powerful tool for neuroscientists.
In the next section, we further describe how AI is now increasingly being utilized for the advancement of neuroscience research, including in the objective analysis of animal behavior and its neural basis.

4. AI for analyzing behavior and its neural correlates

Understanding the relationship between neural activity and behavior is a critical goal of neuroscience. Recently developed large-scale neural imaging techniques have now enabled huge quantities of data to be collected during behavioral tasks in animals (Ahrens & Engert, 2015; Cardin, Crair, & Higley, 2020; Weisenburger & Vaziri, 2016; Yang & Yuste, 2017). However, given the quantity and speed of the individual movements animals perform during behavioral tasks, as well as the difficulty of identifying individual neurons in large and crowded neural imaging datasets, it has been challenging for researchers to effectively and objectively analyze animal behavior and its precise neural correlates (Berman, 2018; Giovannucci et al., 2019; von Ziegler, Sturman, & Bohacek, 2021).

To address the difficulties of human labeling of animal behavior, researchers have turned to AI for help. Over the last few years, several open-source, deep learning-based software toolboxes have been developed for 3D markerless pose estimation across several species and types of behaviors (Arac, Zhao, Dobkin, Carmichael, & Golshani, 2019; Forys, Xiao, Gupta, & Murphy, 2020; Graving et al., 2019; Günel et al., 2019; Mathis et al., 2018; Nath et al., 2019; Pereira et al., 2019). Perhaps the most widely used of these has been DeepLabCut, a deep neural network that incorporates the feature detectors from DeeperCut, a multi-person pose estimation model, and is able to accurately estimate the pose of several commonly used laboratory animals with minimal training (Lauer et al., 2021; Mathis et al., 2018; Nath et al., 2019). These pose estimation data can then be combined with various supervised machine learning tools, including JAABA (Kabra, Robie, Rivera-Alba, Branson, & Branson, 2013) and SimBA (Nilsson et al., 2020), that allow for the automated identification of specific behaviors labeled by humans, such as grooming, freezing, and various social behaviors (a toy version of this keypoints-to-behavior step is sketched below). The combined use of such tools has been shown to match human ability for the accurate quantification of several types of behaviors, and can outperform commercially available animal-tracking software packages (Sturman et al., 2020).

In addition to supervised machine learning analysis of animal behavioral data, several unsupervised machine learning tools have been developed, including MotionMapper (Berman, Choi, Bialek, & Shaevitz, 2014), MoSeq (Wiltschko et al., 2015), and, more recently, uBAM (Brattoli et al., 2021). These unsupervised approaches allow objective classification of the full repertoire of animal behavior and can potentially uncover subtle behavioral traits that might be missed by humans (Kwok, 2019).
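The sketch below illustrates the keypoints-to-behavior step on synthetic data: pose-estimation output is turned into simple per-frame features (positions and frame-to-frame speeds) and fed to a classifier trained on human labels. The feature set, the random-forest classifier, and the labels are illustrative assumptions; this does not reproduce the API or workflow of DeepLabCut, JAABA, or SimBA.

```python
# Minimal sketch of supervised behaviour classification from pose-estimation
# keypoints. Trajectories and labels are synthetic, illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_frames, n_keypoints = 5000, 8

# Synthetic (x, y) keypoint trajectories and per-frame human labels
# (0 = other, 1 = e.g. grooming).
keypoints = rng.normal(size=(n_frames, n_keypoints, 2)).cumsum(axis=0)
labels = (rng.random(n_frames) < 0.2).astype(int)

# Simple per-frame features: keypoint positions plus frame-to-frame speeds.
speeds = np.vstack([np.zeros((1, n_keypoints)),
                    np.linalg.norm(np.diff(keypoints, axis=0), axis=2)])
features = np.hstack([keypoints.reshape(n_frames, -1), speeds])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:",
      cross_val_score(clf, features, labels, cv=3).mean())
```

In practice, the feature engineering (distances between body parts, angles, rolling-window statistics) and the amount and quality of labeled data matter far more than the particular classifier chosen.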
As with animal behavioral data, human annotation of animal neural imaging datasets acquired from large-scale recording of neural activity, such as in-vivo imaging of activity markers like calcium indicators, is time-consuming and suffers from a large degree of variability between annotators in the segmentation of individual neurons (Giovannucci et al., 2019).

In the last decade, several tools utilizing classic machine learning and deep learning approaches have been developed to assist the analysis of animal calcium imaging data through the automated detection and quantification of individual neuron activity, known as source extraction (Pnevmatikakis, 2019). The most widely used of these have been unsupervised machine learning approaches employing activity-based segmentation algorithms, including principal component and independent component analysis (PCA/ICA) (Mukamel, Nimmerjahn, & Schnitzer, 2009), variations of constrained non-negative matrix factorization (CNMF) (Friedrich, Giovannucci, & Pnevmatikakis, 2021; Guan et al., 2018; Pnevmatikakis et al., 2016; Zhou et al., 2018), and dictionary learning (Giovannucci et al., 2017; Petersen, Simon, & Witten, 2017), to extract the signals of neuron-like regions of interest from the background (a toy factorization of this kind is sketched at the end of this section). While these techniques offer the benefit that they require no training, and thus can be applied to the analysis of various cell types and even dendritic imaging, they often suffer from false positives and are unable to identify low-activity neurons, making it difficult to longitudinally track the activity of neurons that may be temporarily inactive in certain contexts (Lu et al., 2018). To address this limitation, several supervised deep learning approaches that segment neurons based upon features learned from human-labeled calcium imaging datasets have been developed (Apthorpe et al., 2016; Denis, Dard, Quiroli, Cossart, & Picardo, 2020; Giovannucci et al., 2019; Klibisz, Rose, Eicholtz, Blundon, & Zakharenko, 2017; Soltanian-Zadeh, Sahingur, Blau, Gong, & Farsiu, 2019; Xu, Su, Zhut, Guan, & Zhangt, 2016). Many of these tools, including U-Net2DS (Klibisz et al., 2017), STNeuroNet (Soltanian-Zadeh et al., 2019), and DeepCINAC (Denis et al., 2020), train a CNN to segment neurons in either 2D or 3D space, have been demonstrated to detect neurons with near-human accuracy, and outperform other techniques including PCA/ICA, allowing for accurate, fast, and reproducible neuron detection and classification (Apthorpe et al., 2016; Giovannucci et al., 2019; Mukamel et al., 2009).

Finally, efforts are now being made to combine AI analysis of animal behavior and neural imaging data, not only for the automated mapping of behavior onto its neural correlates, but also to predict and model animal behavior based on analyzed neural activity data. One such recently developed system is BehaveNet, a probabilistic framework for the unsupervised analysis of behavioral video with semi-supervised decoding of neural activity (Batty et al., 2019). The resulting generative models of this framework are able to decode animal neural activity data and create probabilistic full-resolution video simulations of behavior (Batty et al., 2019). Further development of technologies designed to automate the mapping of neural activity patterns onto behavioral motifs may help to elucidate how discrete patterns of neural activity are related to specific movements (Musall et al., 2019).

While the studies described here address the analysis and modeling of healthy behavior and brain activity in animals, efforts have also been made to use AI to understand and identify abnormal brain functioning. In the next section, we discuss AI-based approaches to the objective classification of psychiatric disorders, and how deep learning approaches have been utilized for modeling such disorders in artificial neural networks.
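As a much-simplified stand-in for the CNMF-style source extraction described above, the sketch below factorizes a small synthetic calcium-imaging movie into temporal traces and spatial footprints using plain non-negative matrix factorization. The movie, cell shapes, and all parameters are illustrative assumptions; real pipelines additionally model temporal dynamics, background, and noise.

```python
# Minimal sketch of activity-based source extraction via non-negative matrix
# factorisation. The synthetic movie and all parameters are illustrative.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
n_frames, height, width, n_cells = 500, 32, 32, 3

# Build a synthetic movie: a few Gaussian-shaped cells with sparse transients.
yy, xx = np.mgrid[0:height, 0:width]
footprints = np.stack([np.exp(-(((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0))
                       for cy, cx in [(8, 8), (16, 24), (24, 12)]])
traces = (rng.random((n_cells, n_frames)) < 0.02).astype(float)
traces = np.apply_along_axis(np.convolve, 1, traces,
                             np.exp(-np.arange(20) / 5.0))[:, :n_frames]
movie = (np.einsum('kf,khw->fhw', traces, footprints)
         + 0.05 * rng.random((n_frames, height, width)))

# Factorise (frames x pixels) into temporal components (W) and spatial footprints (H).
model = NMF(n_components=n_cells, init='nndsvda', max_iter=500)
W = model.fit_transform(movie.reshape(n_frames, -1))   # estimated traces
H = model.components_.reshape(n_cells, height, width)  # estimated footprints
print(W.shape, H.shape)
```

The intent is that the factorization separates the movie into components whose spatial parts resemble the planted cells; the cited methods add the constraints and deconvolution steps needed to make this robust on real data.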
5. The interface between AI and psychiatry

Despite the adoption of standardized diagnostic criteria in clinical manuals such as the Diagnostic and Statistical Manual of Mental Disorders (DSM) and the International Classification of Diseases (ICD), psychiatric and developmental disorders are still primarily identified based upon a patient's subjective behavioral symptoms and self-report measures. Not only is this method often unreliable due to its subjectivity (Wakefield, 2016), but it also leads to an explanatory gap between phenomenology and neurobiology. However, in the last few decades, huge advancements in computing power, alongside the collection of large neuroimaging datasets, have allowed researchers to begin to bridge this gap by using AI to identify, model, and potentially even treat psychiatric and developmental disorders.

One area of particular promise has been the use of AI for the objective identification of brain disorders. Using machine learning methods, classifiers have been built to predict diagnostic labels of psychiatric and developmental disorders (see Bzdok & Meyer-Lindenberg, 2017; Cho et al., 2019; Zhou et al., 2020, for reviews). The scores produced by these probabilistic classifiers provide a degree of classification certainty that can be interpreted as a neural liability for the disorder, and represent new biological dimensions of the disorders. However, while many of these classifiers, including those for schizophrenia (Greenstein, Malley, Weisinger, Clasen, & Gogtay, 2012; Orrù, Pettersson-Yeo, Marquand, Sartori, & Mechelli, 2012; Yassin et al., 2020) and ASD (Eslami, Almuqhim, Raiker, & Saeed, 2021; Yassin et al., 2020), are able to accurately identify the intended disorder, a major criticism has been that they are often validated only in a single sample cohort. To address this issue, recent attempts have been made to establish robust classifiers using larger and more varied sample data. This has led to the identification of classifiers for ASD and schizophrenia that generalized to independent cohorts, regardless of ethnicity, country, and MRI vendor, while still demonstrating classification accuracies of 61%–76% (Yamada et al., 2017; Yoshihara et al., 2020).

Aside from machine learning, deep learning approaches have also been applied to the classification of psychiatric and developmental disorders (see Durstewitz et al., 2019; Koppe et al., 2021, for reviews). A major advantage of deep neural networks is that their multi-layered design makes them particularly suited to learning high-level representations from complex raw data, allowing features to be extracted from neuroimaging data with far fewer parameters than machine learning architectures (Durstewitz et al., 2019; Jang, Plis, Calhoun, & Lee, 2017; Koppe et al., 2021; Plis et al., 2014; Schmidhuber, 2015). Accordingly, in the last few years, several deep neural networks have been reported to effectively classify brain disorders from neuroimaging data, including schizophrenia (Oh, Oh, Lee, Chae, & Yun, 2020; Sun et al., 2021; Yan et al., 2019; Zeng et al., 2018), autism (Guo et al., 2017; Heinsfeld, Franco, Craddock, Buchweitz, & Meneguzzi, 2018; Misman et al., 2019; Raj & Masood, 2020), ADHD (Chen, Li, et al., 2019; Chen, Song, & Li, 2019; Dubreuil-Vall, Ruffini, & Camprodon, 2020), and depression (Li, La, Wang, Hu, & Zhang, 2020; Uyulan et al., 2020).
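To make the classifier-building step concrete, the sketch below derives functional-connectivity features from synthetic resting-state time series and evaluates a cross-validated logistic regression separating simulated patients from controls. The group sizes, the injected group difference, and the choice of classifier are illustrative assumptions; none of this corresponds to a specific published pipeline or dataset.

```python
# Minimal sketch of a cross-validated diagnostic classifier built on
# functional-connectivity features. All subjects and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(4)
n_subjects, n_regions, n_timepoints = 120, 30, 200
diagnosis = rng.integers(0, 2, n_subjects)   # 0 = control, 1 = patient

features = []
for label in diagnosis:
    ts = rng.normal(size=(n_timepoints, n_regions))
    if label:                                # inject a weak group difference
        ts[:, :5] += 0.3 * ts[:, 5:10]
    fc = np.corrcoef(ts.T)                   # region-by-region correlation matrix
    iu = np.triu_indices(n_regions, k=1)     # keep the upper triangle as features
    features.append(fc[iu])
X = np.array(features)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, diagnosis, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Note that the cross-validation here stays within a single synthetic "cohort"; as the text emphasizes, showing that such a classifier generalizes to independent cohorts, scanners, and sites is the harder and more important test.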
Further development of AI models for data-driven, dimensional psychiatry will likely help to address the current discontent surrounding categorical diagnostic criteria.

In addition to the classification of disorders based on neuroimaging data, AI is also increasingly being used to model various psychiatric and developmental disorders (see Lanillos et al., 2020, for an in-depth review). This work largely began in the 1980s and 90s with studies modeling schizophrenia and ASD using artificial neural networks (Cohen, 1994; Cohen & Servan-Schreiber, 1992; Hoffman, 1987; Horn & Ruppin, 1995). Many of these models were inspired by biological evidence of structural and synaptic abnormalities associated with particular psychiatric disorder symptoms. For example, evidence of reduced metabolism in the frontal cortex (Feinberg, 1983; Feinberg, Koresko, & Gottlieb, 1965; Feinberg, Koresko, Gottlieb, & Wender, 1964) and aberrant synaptic regeneration of abnormal brain structures (Stevens, 1992) have prompted the creation of neural networks designed to simulate how synaptic pruning (Hoffman & Dobscha, 1989) and reactive synaptic reorganization (Horn & Ruppin, 1995; Ruppin, Reggia, & Horn, 1996) may explain delusions and hallucinations in schizophrenia patients (a toy pruning network of this kind is sketched at the end of this section). Similarly, neural network models of excessive or reduced neuronal connections (Cohen, 1994, 1998; Thomas, Knowland, & Karmiloff-Smith, 2011) were generated to model biological observations of abnormal neuronal density in cortical, limbic, and cerebellar regions (Bailey et al., 1998; Bauman, 1991; Bauman & Kemper, 1985) hypothesized to contribute to developmental regression in ASD. Excitingly, more recently, deep learning models, including high-dimensional RNN models of schizophrenia (Yamashita & Tani, 2012) and ASD (Idei et al., 2017, 2018), have begun to be implemented in robots, allowing direct observation and comparison of modeled behavior with that seen in patients.

Finally, in the near future, AI could begin to play an important role in the treatment of psychiatric and developmental disorders. Computer-assisted therapy (CAT), including AI chatbots delivering cognitive behavioral therapies, is beginning to be tested for the treatment of psychiatric disorders including depression and anxiety (Carroll & Rounsaville, 2010; Fitzpatrick, Darcy, & Vierhile, 2017; Fulmer, Joerin, Gentile, Lakerink, & Rauws, 2018). While still in their infancy, these CATs offer distinct advantages over human-led therapies in terms of price and accessibility, although their effectiveness in comparison to currently used therapeutic methods has yet to be robustly measured. Additionally, the identification of neuroimaging-based classifiers for psychiatric disorders (described above) has inspired the launch of real-time fMRI-based neurofeedback projects in which patients attempt to normalize their own brain connectivity patterns through neurofeedback. Meta-analyses of such studies have indicated that neurofeedback treatments result in significant amelioration of the symptoms of several disorders, including schizophrenia, depression, anxiety disorder, and ASD, suggesting the potential benefit of further use of such digital treatments (Dudek & Dodell-Feder, 2020; Schoenberg & David, 2014).
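The sketch below gives a toy version of the pruning idea behind such models: a small Hopfield-type attractor network stores a handful of patterns, and random deletion of increasing fractions of its synapses degrades retrieval from a corrupted cue. The network size, pruning rule, and update scheme are illustrative assumptions and are far simpler than the published models.

```python
# Toy Hopfield-type attractor network in which random synaptic pruning
# degrades memory retrieval. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_units, n_patterns = 200, 10
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian weight matrix storing the patterns.
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0.0)

def recall(W, cue, steps=20):
    s = cue.copy().astype(float)
    for _ in range(steps):                      # synchronous sign updates
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

for prune_frac in [0.0, 0.5, 0.9]:
    # Randomly delete synapses (mask is not symmetrized, for simplicity).
    mask = rng.random(W.shape) >= prune_frac
    Wp = W * mask
    cue = patterns[0].copy()
    cue[:40] *= -1                              # corrupt 20% of the cue
    overlap = np.mean(recall(Wp, cue) == patterns[0])
    print(f"pruned {prune_frac:.0%} of synapses -> retrieval overlap {overlap:.2f}")
```

In the published models, the interest lies in the qualitative failure modes that can emerge with over-pruning, which are interpreted as analogues of hallucinations and delusions; the toy network only shows the basic storage-and-degradation setup.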
6. Conclusions

Since the inception of AI research midway through the last century, the brain has served as the primary source of inspiration for the creation of artificial systems of intelligence. This is largely based upon the reasoning that the brain is a proof of concept of a comprehensive intelligence system capable of perception, planning, and decision making, and therefore offers an attractive template for the design of AI. In this review, based upon topics presented at the 2020 International Symposium on Artificial Intelligence and Brain Science, we have discussed how brain-inspired mechanistic, structural, and functional elements are being utilized to create novel, and optimize existing, AI systems. In particular, this has led to the development of high-dimensional deep neural networks, often incorporating hierarchical architectures inspired by those found in the brain, that are capable of feats of intelligence including visual object recognition and memory-based cognitive tasks.

Advancements in AI have also helped to foster progress within the field of neuroscience. Here we have described how the use of machine learning and neural networks for the automated analysis of big data has revolutionized the analysis of animal behavioral and neuroimaging studies, and how these methods have been utilized for the objective classification of psychiatric and developmental disorders.

Importantly, while it has not been discussed in great detail in the current review, it should be considered that the relationship between AI and neuroscience is not simply two-way, but also encompasses the field of cognitive science (see Battleday, Peterson, & Griffiths, 2021; Cichy & Kaiser, 2019; Forbus, 2010; Kriegeskorte & Douglas, 2018, for reviews). Indeed, over the years, much of AI research has been guided by theories of brain functioning established by cognitive scientists (Elman, 1990; Hebb, 1949; Rumelhart & McClelland, 1986). For example, the convolutional neural networks discussed earlier in this review (in the section on visual processing) were inspired in part by computational models of cognition within the brain, including principles such as nonlinear feature maps and pooling of inputs, which were themselves derived from observations in neurophysiological studies in animals (Battleday et al., 2021; Fukushima, 1980; Hubel & Wiesel, 1962; Mozer, 1987; Riesenhuber & Poggio, 1999). In turn, neural networks have been used to guide new cognitive models of intellectual abilities, including perception, memory, and language, giving rise to the connectionism movement within cognitive science (Barrow, 1996; Fodor & Pylyshyn, 1988; Mayor, Gomez, Chang, & Lupyan, 2014; Yamins & DiCarlo, 2016). If we are to use AI to model and potentially elucidate brain functioning, the primary focus of cognitive science, it is important that we continue to use not only biological data from neuroscience studies, but also cognitive models, to inspire the architectural, mechanistic, and algorithmic design of artificial neural networks.

Despite their accomplishments and apparent complexity, current AI systems are still remarkably simplistic in comparison to brain networks, and in many cases they still lack the ability to accurately model brain functions (Bae, Kim, & Kim, 2021; Barrett, Morcos, & Macke, 2019; Hasson et al., 2020; Pulvermüller, Tomasello, Henningsen-Schomers, & Wennekers, 2021; Tang et al., 2019). A major limitation is that, in general, current models are still not able to model the brain at multiple levels: from synaptic reorganization and the neuromodulation of neuronal excitability by neurotransmitters and hormones at the micro level, to the large-scale synchronization of spiking activity and global connectivity at the macro level.
In fact, the integration of various AI models of brain functioning, including the models of the cerebral cortex described in this review as well as models of other brain regions, including limbic and motor control regions (Kowalczuk & Czubenko, 2016; Merel, Botvinick, & Wayne, 2019; Parsapoor, 2016), remains one of the greatest challenges in the creation of an AGI system capable of modeling the entire brain. In spite of these difficulties, the continued interaction between neuroscience and AI will undoubtedly expedite progress in both areas.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This work was supported by MEXT Grant-in-Aid for Scientific Research on Innovative Areas "Correspondence and Fusion of Artificial Intelligence and Brain Science" (JP16H06568 to TH and TM, JP16H06572 to HT).

References

Ahrens, M. B., & Engert, F. (2015). Large-scale imaging in small brains. Current Opinion in Neurobiology, 32, 78–86. http://dx.doi.org/10.1016/j.conb.2015.01.007. Amit, D. J., Bernacchia, A., & Yakovlev, V. (2003). Multiple-object working memory—A model for behavioral performance. Cerebral Cortex, 13(5), 435–443. http://dx.doi.org/10.1093/cercor/13.5.435. Amit, D. J., & Brunel, N. (1995). Learning internal representations in an attractor neural network with analogue neurons. Network: Computation in Neural Systems, 6(3), 359–388. http://dx.doi.org/10.1088/0954-898x_6_3_004. Apthorpe, N. J., Riordan, A. J., Aguilar, R. E., Homann, J., Gu, Y., Tank, D. W., et al. (2016). Automatic neuron detection in calcium imaging data using convolutional networks. ArXiv. Arac, A., Zhao, P., Dobkin, B. H., Carmichael, S. T., & Golshani, P. (2019). DeepBehavior: A deep learning toolbox for automated analysis of animal and human behavior imaging data. Frontiers in Systems Neuroscience, 13, 20. http://dx.doi.org/10.3389/fnsys.2019.00020. Azimi, Z., Barzan, R., Spoida, K., Surdin, T., Wollenweber, P., Mark, M. D., et al. (2020). Separable gain control of ongoing and evoked activity in the visual cortex by serotonergic input. ELife, 9, Article e53552. http://dx.doi.org/10.7554/elife.53552. Baddeley, A. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4(10), 829–839. http://dx.doi.org/10.1038/nrn1201. Bae, H., Kim, S. J., & Kim, C.-E. (2021). Lessons from deep neural networks for studying the coding principles of biological neural networks. Frontiers in Systems Neuroscience, 14, Article 615129. http://dx.doi.org/10.3389/fnsys.2020.615129. Bailey, A., Luthert, P., Dean, A., Harding, B., Janota, I., Montgomery, M., et al. (1998). A clinicopathological study of autism. Brain, 121(5), 889–905. http://dx.doi.org/10.1093/brain/121.5.889. Barrett, D. G., Morcos, A. S., & Macke, J. H. (2019). Analyzing biological and artificial neural networks: Challenges with opportunities for synergy? Current Opinion in Neurobiology, 55, 55–64. http://dx.doi.org/10.1016/j.conb.2019.01.007. Barrow, H. (1996). Connectionism and neural networks. In M. A. Boden (Ed.), Artificial intelligence: Handbook of perception and cognition. Academic Press, http://dx.doi.org/10.1016/b978-012161964-0/50007-8. Bashivan, P., Kar, K., & DiCarlo, J. J. (2019). Neural population control via deep image synthesis. Science, 364(6439), Article eaav9436.
http://dx.doi.org/10. 1126/science.aav9436. Bastos, A. M., Loonis, R., Kornblith, S., Lundqvist, M., & Miller, E. K. (2018). Laminar recordings in frontal cortex suggest distinct layers for maintenance and control of working memory. Proceedings of the National Academy of Sciences, 115(5), 1117–1122. http://dx.doi.org/10.1073/pnas.1710323115. Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., & Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron, 76(4), 695–711. http://dx.doi.org/10.1016/j.neuron.2012.10.038. Battleday, R. M., Peterson, J. C., & Griffiths, T. L. (2021). From convolutional neural networks to models of higher-level cognition (and back again). Annals of the New York Academy of Sciences, http://dx.doi.org/10.1111/nyas.14593. Batty, E., Whiteway, M., Saxena, S., Biderman, D., Abe, T., Musall, S., et al. (2019). BehaveNet: Nonlinear embedding and Bayesian neural decoding of behavioral videos. In 33rd conference on neural information processing systems. Bauman, M. L. (1991). Microscopic neuroanatomic abnormalities in autism. Pediatrics, 87(5 Pt 2), 791–796. Bauman, M., & Kemper, T. L. (1985). Histoanatomic observations of the brain in early infantile autism. Neurology, 35(6), 866. http://dx.doi.org/10.1212/wnl. 35.6.866. Beiser, D. G., & Houk, J. C. (1998). Model of cortical-basal ganglionic processing: Encoding the serial order of sensory events. Journal of Neurophysiology, 79(6), 3168–3188. http://dx.doi.org/10.1152/jn.1998.79.6.3168. Bengio, Y. (1993). A connectionist approach to speech recognition. International Journal of Pattern Recognition and Artificial Intelligence, 07(04), 647–667. http://dx.doi.org/10.1142/s0218001493000327. Berman, G. J. (2018). Measuring behavior across scales. BMC Biology, 16(1), 23. http://dx.doi.org/10.1186/s12915-018-0494-7. Berman, G. J., Choi, D. M., Bialek, W., & Shaevitz, J. W. (2014). Mapping the stereotyped behaviour of freely moving fruit flies. Journal of the Royal Society Interface, 11(99), Article 20140672. http://dx.doi.org/10.1098/rsif.2014.0672. Bird, C. M., & Burgess, N. (2008). The hippocampus and memory: Insights from spatial processing. Nature Reviews Neuroscience, 9(3), 182–194. http: //dx.doi.org/10.1038/nrn2335. Blumenfeld, R. S., & Ranganath, C. (2007). Prefrontal cortex and long-term memory encoding: An integrative review of findings from neuropsychology and neuroimaging. The Neuroscientist, 13(3), 280–291. http://dx.doi.org/10. 1177/1073858407299290. Boran, E., Fedele, T., Klaver, P., Hilfiker, P., Stieglitz, L., Grunwald, T., et al. (2019). Persistent hippocampal neural firing and hippocampal-cortical coupling predict verbal working memory load. Science Advances, 5(3), Article eaav3687. http://dx.doi.org/10.1126/sciadv.aav3687. Botvinick, M. M., & Plaut, D. C. (2006). Short-term memory for serial order: A recurrent neural network model. Psychological Review, 113(2), 201–233. http://dx.doi.org/10.1037/0033-295x.113.2.201. Brattoli, B., Büchler, U., Dorkenwald, M., Reiser, P., Filli, L., Helmchen, F., et al. (2021). Unsupervised behaviour analysis and magnification (uBAM) using deep learning. Nature Machine Intelligence, 3(6), 495–506. http://dx.doi.org/ 10.1038/s42256-021-00326-x. Bzdok, D., & Meyer-Lindenberg, A. (2017). Machine learning for precision psychiatry: Opportunities and challenges. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 3(3), 223–230. http://dx.doi.org/10.1016/j. bpsc.2017.11.007. Cadieu, C. F., Hong, H., Yamins, D. L. 
K., Pinto, N., Ardila, D., Solomon, E. A., et al. (2014). Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLoS Computational Biology, 10(12), Article e1003963. http://dx.doi.org/10.1371/journal.pcbi.1003963. Cardin, J. A., Crair, M. C., & Higley, M. J. (2020). Mesoscopic imaging: Shining a wide light on large-scale neural dynamics. Neuron, 108(1), 33–43. http: //dx.doi.org/10.1016/j.neuron.2020.09.031. Carroll, K. M., & Rounsaville, B. J. (2010). Computer-assisted therapy in psychiatry: Be brave—It’s a new world. Current Psychiatry Reports, 12(5), 426–432. http://dx.doi.org/10.1007/s11920-010-0146-2. Chaudhuri, R., Knoblauch, K., Gariel, M.-A., Kennedy, H., & Wang, X.-J. (2015). A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex. Neuron, 88(2), 419–431. http://dx.doi.org/10.1016/j.neuron. 2015.09.008. Chen, M., Li, H., Wang, J., Dillman, J. R., Parikh, N. A., & He, L. (2019). A multichannel deep neural network model analyzing multiscale functional brain connectome data for attention deficit hyperactivity disorder detection. Radiology: Artificial Intelligence, 2(1), Article e190012. http://dx.doi.org/10. 1148/ryai.2019190012. Chen, H., Song, Y., & Li, X. (2019). A deep learning framework for identifying children with ADHD using an EEG-based brain network. Neurocomputing, 356, 83–96. http://dx.doi.org/10.1016/j.neucom.2019.04.058. Cho, G., Yim, J., Choi, Y., Ko, J., & Lee, S.-H. (2019). Review of machine learning algorithms for diagnosing mental illness. Psychiatry Investigation, 16(4), 262–269. http://dx.doi.org/10.30773/pi.2018.12.21.2. Christophel, T. B., Klink, P. C., Spitzer, B., Roelfsema, P. R., & Haynes, J.-D. (2017). The distributed nature of working memory. Trends in Cognitive Sciences, 21(2), 111–124. http://dx.doi.org/10.1016/j.tics.2016.12.007. Cichy, R. M., & Kaiser, D. (2019). Deep neural networks as scientific models. Trends in Cognitive Sciences, 23(4), 305–317. http://dx.doi.org/10.1016/j.tics. 2019.01.009. Cohen, I. L. (1994). An artificial neural network analogue of learning in autism. Biological Psychiatry, 36(1), 5–20. http://dx.doi.org/10.1016/0006-3223(94) 90057-4. Cohen, I. L. (1998). In D. J. Stein, & J. Ludik (Eds.), Neural network analysis of learning in autism. Cambridge University Press. Cohen, J. D., & Servan-Schreiber, D. (1992). Context, cortex, and dopamine: A connectionist approach to behavior and biology in schizophrenia. Psychological Review, 99(1), 45–77. http://dx.doi.org/10.1037/0033-295x.99.1.45. Crick, F. (1989). The recent excitement about neural networks. Nature, 337(6203), 129–132. http://dx.doi.org/10.1038/337129a0. Dabney, W., Kurth-Nelson, Z., Uchida, N., Starkweather, C. K., Hassabis, D., Munos, R., et al. (2020). A distributional code for value in dopaminebased reinforcement learning. Nature, 577(7792), 671–675. http://dx.doi.org/ 10.1038/s41586-019-1924-6. Dapello, J., Marques, T., Schrimpf, M., Geiger, F., Cox, D. D., & DiCarlo, J. J. (2020). Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations. http://dx.doi.org/10.1101/2020.06.16.154542, BioRxiv, 2020.06.16.154542. Denis, J., Dard, R. F., Quiroli, E., Cossart, R., & Picardo, M. A. (2020). DeepCINAC: A deep-learning-based Python toolbox for inferring calcium imaging neuronal activity based on movie visualization. ENeuro, 7(4), http://dx.doi.org/10.1523/ eneuro.0038-20.2020, ENEURO.0038-20.2020. Diehl, P. U., Zarrella, G., Cassidy, A., Pedroni, B. 
U., & Neftci, E. (2016). Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware. ArXiv. Diesmann, M., Gewaltig, M.-O., & Aertsen, A. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature, 402(6761), 529–533. http://dx.doi.org/10.1038/990101. Dubreuil-Vall, L., Ruffini, G., & Camprodon, J. A. (2020). Deep learning convolutional neural networks discriminate adult ADHD from healthy individuals on the basis of event-related spectral EEG. Frontiers in Neuroscience, 14, 251. http://dx.doi.org/10.3389/fnins.2020.00251. Dudek, E., & Dodell-Feder, D. (2020). The efficacy of real-time functional magnetic resonance imaging neurofeedback for psychiatric illness: A metaanalysis of brain and behavioral outcomes. Neuroscience & Biobehavioral Reviews, 121, 291–306. http://dx.doi.org/10.1016/j.neubiorev.2020.12.020. Durstewitz, D., Koppe, G., & Meyer-Lindenberg, A. (2019). Deep neural networks in psychiatry. Molecular Psychiatry, 24(11), 1583–1598. http://dx.doi.org/10. 1038/s41380-019-0365-9. Durstewitz, D., Seamans, J. K., & Sejnowski, T. J. (2000). Neurocomputational models of working memory. Nature Neuroscience, 3(Suppl. 11), 1184–1191. http://dx.doi.org/10.1038/81460. 609 T. Macpherson, A. Churchland, T. Sejnowski et al. Neural Networks 144 (2021) 603–613 Eichenbaum, H. (2017). Prefrontal–hippocampal interactions in episodic memory. Nature Reviews Neuroscience, 18(9), 547–558. http://dx.doi.org/10.1038/nrn. 2017.74. Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179–211. http://dx.doi.org/10.1016/0364-0213(90)90002-e. Eslami, T., Almuqhim, F., Raiker, J. S., & Saeed, F. (2021). Machine learning methods for diagnosing autism spectrum disorder and attentiondeficit/hyperactivity disorder using functional and structural MRI: A survey. Frontiers in Neuroinformatics, 14, Article 575999. http://dx.doi.org/10.3389/ fninf.2020.575999. Federer, C., Xu, H., Fyshe, A., & Zylberberg, J. (2020). Improved object recognition using neural networks trained to mimic the brain’s statistical properties. Neural Networks, 131, 103–114. http://dx.doi.org/10.1016/j.neunet.2020.07. 013. Feinberg, I. (1983). Schizophrenia: Caused by a fault in programmed synaptic elimination during adolescence? Journal of Psychiatric Research, 17(4), 319–334. http://dx.doi.org/10.1016/0022-3956(82)90038-3. Feinberg, I., Koresko, R. L., & Gottlieb, F. (1965). Further observations on electrophysiological sleep patterns in schizophrenia. Comprehensive Psychiatry, 6(1), 21–24. http://dx.doi.org/10.1016/s0010-440x(65)80004-9. Feinberg, I., Koresko, R. L., Gottlieb, F., & Wender, P. H. (1964). Sleep electroencephalographic and eye-movement patterns in schizophrenic patients. Comprehensive Psychiatry, 5(1), 44–53. http://dx.doi.org/10.1016/s0010-440x(64) 80042-0. Fiebig, F., & Lansner, A. (2017). A spiking working memory model based on hebbian short-term potentiation. Journal of Neuroscience, 37(1), 83–96. http: //dx.doi.org/10.1523/jneurosci.1989-16.2016. Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (woebot): A randomized controlled trial. JMIR Mental Health, 4(2), Article e19. http://dx.doi.org/10.2196/mental. 7785. Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1–2), 3–71. 
http://dx.doi.org/10.1016/0010- 0277(88)90031-5. Forbus, K. D. (2010). AI and cognitive science: The past and next 30 years. Topics in Cognitive Science, 2(3), 345–356. http://dx.doi.org/10.1111/j.1756- 8765.2010.01083.x. Forys, B. J., Xiao, D., Gupta, P., & Murphy, T. H. (2020). Real-time selective markerless tracking of forepaws of head fixed mice using deep neural networks. ENeuro, 7(3), http://dx.doi.org/10.1523/eneuro.0096-20.2020, ENEURO.0096-20.2020. Friedrich, J., Giovannucci, A., & Pnevmatikakis, E. A. (2021). Online analysis of microendoscopic 1-photon calcium imaging data streams. PLoS Computational Biology, 17(1), Article e1008565. http://dx.doi.org/10.1371/journal.pcbi. 1008565. Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4), 193–202. http://dx.doi.org/10.1007/bf00344251. Fulmer, R., Joerin, A., Gentile, B., Lakerink, L., & Rauws, M. (2018). Using psychological artificial intelligence (tess) to relieve symptoms of depression and anxiety: Randomized controlled trial. JMIR Mental Health, 5(4), Article e64. http://dx.doi.org/10.2196/mental.9782. Fuster, J. M., & Alexander, G. E. (1971). Neuron activity related to short-term memory. Science, 173(3997), 652–654. http://dx.doi.org/10.1126/science.173. 3997.652. Giovannucci, A., Friedrich, J., Gunn, P., Kalfon, J., Brown, B. L., Koay, S. A., et al. (2019). CaImAn an open source tool for scalable calcium imaging data analysis. ELife, 8, Article e38173. http://dx.doi.org/10.7554/elife.38173. Giovannucci, A., Friedrich, J., Kaufman, M., Churchland, A., Chklovskii, D., Paninski, L., et al. (2017). OnACID: Online analysis of calcium imaging data in real time*. http://dx.doi.org/10.1101/193383, BioRxiv, 193383. Goldman-Rakic, P. S. (1995). Cellular basis of working memory. Neuron, 14(3), 477–485. http://dx.doi.org/10.1016/0896-6273(95)90304-6. Gonchar, Y., Wang, Q., & Burkhalter, A. H. (2008). Multiple distinct subtypes of GABAergic neurons in mouse visual cortex identified by triple immunostaining. Frontiers in Neuroanatomy, 2, 3. http://dx.doi.org/10.3389/neuro.05.003. 2007. Graving, J. M., Chae, D., Naik, H., Li, L., Koger, B., Costelloe, B. R., et al. (2019). DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning. ELife, 8, Article e47994. http://dx.doi.org/10.7554/elife. 47994. Greenstein, D., Malley, J. D., Weisinger, B., Clasen, L., & Gogtay, N. (2012). Using multivariate machine learning methods and structural MRI to classify childhood onset schizophrenia and healthy controls. Frontiers in Psychiatry, 3, 53. http://dx.doi.org/10.3389/fpsyt.2012.00053. Grossberg, S. (1987). Competitive learning: From interactive activation to adaptive resonance. Cognitive Science, 11(1), 23–63. http://dx.doi.org/10.1111/j. 1551-6708.1987.tb00862.x. Guan, J., Li, J., Liang, S., Li, R., Li, X., Shi, X., et al. (2018). NeuroSeg: Automated cell detection and segmentation for in vivo two-photon Ca2+ imaging data. Brain Structure and Function, 223(1), 519–533. http://dx.doi.org/10.1007/s00429- 017-1545-5. Günel, S., Rhodin, H., Morales, D., Campagnolo, J., Ramdya, P., & Fua, P. (2019). DeepFly3D, a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila. ELife, 8, Article e48571. http://dx.doi. org/10.7554/elife.48571. Guo, X., Dominick, K. C., Minai, A. A., Li, H., Erickson, C. A., & Lu, L. J. (2017). 
Diagnosing autism spectrum disorder from brain resting-state functional connectivity patterns using a deep neural network with a novel feature selection method. Frontiers in Neuroscience, 11, 460. http://dx.doi.org/10. 3389/fnins.2017.00460. Han, B., Sengupta, A., & Roy, K. (2016). On the energy benefits of spiking deep neural networks: A case study (special session paper). In 2016 international joint conference on neural networks (pp. 971–976). http://dx.doi.org/10.1109/ ijcnn.2016.7727303. Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscienceinspired artificial intelligence. Neuron, 95(2), 245–258. http://dx.doi.org/10. 1016/j.neuron.2017.06.011. Hasson, U., Nastase, S. A., & Goldstein, A. (2020). Direct fit to nature: An evolutionary perspective on biological and artificial neural networks. Neuron, 105(3), 416–434. http://dx.doi.org/10.1016/j.neuron.2019.12.002. Hazy, T. E., Frank, M. J., & O’Reilly, R. C. (2007). Towards an executive without a homunculus: Computational models of the prefrontal cortex/basal ganglia system. Philosophical Transactions of the Royal Society, Series B (Biological Sciences), 362(1485), 1601–1613. http://dx.doi.org/10.1098/rstb.2007.2055. Hebb, D. O. (1949). The organization of behavior. Wiley & Sons. Heinsfeld, A. S., Franco, A. R., Craddock, R. C., Buchweitz, A., & Meneguzzi, F. (2018). Identification of autism spectrum disorder using deep learning and the ABIDE dataset. NeuroImage: Clinical, 17, 16–23. http://dx.doi.org/10.1016/ j.nicl.2017.08.017. Hoffman, R. E. (1987). Computer simulations of neural information processing and the schizophrenia-mania dichotomy. Archives of General Psychiatry, 44(2), 178–188. http://dx.doi.org/10.1001/archpsyc.1987.01800140090014. Hoffman, R. E., & Dobscha, S. K. (1989). Cortical pruning and the development of schizophrenia: A computer model. Schizophrenia Bulletin, 15(3), 477–490. http://dx.doi.org/10.1093/schbul/15.3.477. Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8), 2554–2558. http://dx.doi.org/10.1073/pnas.79.8.2554. Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y. (2013). Neural decoding of visual imagery during sleep. Science, 340(6132), 639–642. http://dx.doi. org/10.1126/science.1234330. Horn, D., & Ruppin, E. (1995). Compensatory mechanisms in an attractor neural network model of schizophrenia. Neural Computation, 7(1), 182–205. http: //dx.doi.org/10.1162/neco.1995.7.1.182. Huang, Q.-S., & Wei, H. (2021). A computational model of working memory based on spike-timing-dependent plasticity. Frontiers in Computational Neuroscience, 15, Article 630999. http://dx.doi.org/10.3389/fncom.2021.630999. Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. Journal Physiology, 160(1), 106–154. http://dx.doi.org/10.1113/jphysiol.1962.sp006837. Idei, H., Murata, S., Chen, Y., Yamashita, Y., Tani, J., & Ogata, T. (2017). Reduced behavioral flexibility by aberrant sensory precision in autism spectrum disorder: A neurorobotics experiment. In 2017 joint IEEE international conference on development and learning and epigenetic robotics (pp. 271–276). http: //dx.doi.org/10.1109/devlrn.2017.8329817. Idei, H., Murata, S., Chen, Y., Yamashita, Y., Tani, J., & Ogata, T. (2018). A neurorobotics simulation of autistic behavior induced by unusual sensory precision. Computational Psychiatry, 2(0), 164–182. 
http://dx.doi.org/10.1162/ cpsy_a_00019. Jang, H., Plis, S. M., Calhoun, V. D., & Lee, J.-H. (2017). Task-specific feature extraction and classification of fMRI volumes using a deep neural network initialized with a deep belief network: Evaluation using sensorimotor tasks. NeuroImage, 145(Pt B), 314–328. http://dx.doi.org/10.1016/j. neuroimage.2016.04.003. Kabra, M., Robie, A. A., Rivera-Alba, M., Branson, S., & Branson, K. (2013). JAABA: Interactive machine learning for automatic annotation of animal behavior. Nature Methods, 10(1), 64–67. http://dx.doi.org/10.1038/nmeth.2281. Kamitani, Y., & Tong, F. (2005). Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8(5), 679–685. http://dx.doi.org/10. 1038/nn1444. Khaligh-Razavi, S.-M., & Kriegeskorte, N. (2014). Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Computational Biology, 10(11), Article e1003915. http://dx.doi.org/10.1371/journal. pcbi.1003915. Kim, R., Li, Y., & Sejnowski, T. J. (2019). Simple framework for constructing functional spiking recurrent neural networks. Proceedings of the National Academy of Sciences, 116(45), 22811–22820. http://dx.doi.org/10.1073/pnas. 1905926116. 610 T. Macpherson, A. Churchland, T. Sejnowski et al. Neural Networks 144 (2021) 603–613 Kim, R., & Sejnowski, T. J. (2021). Strong inhibitory signaling underlies stable temporal dynamics and working memory in spiking neural networks. Nature Neuroscience, 24(1), 129–139. http://dx.doi.org/10.1038/s41593-020-00753- w. Klibisz, A., Rose, D., Eicholtz, M., Blundon, J., & Zakharenko, S. (2017). Fast, simple calcium imaging segmentation with fully convolutional networks. ArXiv. Koppe, G., Meyer-Lindenberg, A., & Durstewitz, D. (2021). Deep learning for small and big data in psychiatry. Neuropsychopharmacology, 46(1), 176–190. http://dx.doi.org/10.1038/s41386-020-0767-z. Kosslyn, S. M., Alpert, N. M., Thompson, W. L., Maljkovic, V., Weise, S. B., Chabris, C. F., et al. (1993). Visual mental imagery activates topographically organized visual cortex: PET investigations. Journal of Cognitive Neuroscience, 5(3), 263–287. http://dx.doi.org/10.1162/jocn.1993.5.3.263. Kosslyn, S. M., Thompson, W. L., Klm, I. J., & Alpert, N. M. (1995). Topographical representations of mental images in primary visual cortex. Nature, 378(6556), 496–498. http://dx.doi.org/10.1038/378496a0. Kowalczuk, Z., & Czubenko, M. (2016). Computational approaches to modeling artificial emotion – An overview of the proposed solutions. Frontiers in Robotics and AI, 3, 21. http://dx.doi.org/10.3389/frobt.2016.00021. Kriegeskorte, N., & Douglas, P. K. (2018). Cognitive computational neuroscience. Nature Neuroscience, 21(9), 1148–1160. http://dx.doi.org/10.1038/s41593- 018-0210-5. Krotov, D., & Hopfield, J. J. (2019). Unsupervised learning by competing hidden units. Proceedings of the National Academy of Sciences, 116(16), Article 201820458. http://dx.doi.org/10.1073/pnas.1820458116. Kwok, R. (2019). Deep learning powers a motion-tracking revolution. Nature, 574(7776), 137–138. http://dx.doi.org/10.1038/d41586-019-02942-5. Lanillos, P., Oliva, D., Philippsen, A., Yamashita, Y., Nagai, Y., & Cheng, G. (2020). A review on neural network models of schizophrenia and autism spectrum disorder. Neural Networks, 122, 338–363. http://dx.doi.org/10.1016/j.neunet. 2019.10.014. Lauer, J., Zhou, M., Ye, S., Menegas, W., Nath, T., Rahman, M. M., et al. (2021). Multi-animal pose estimation and tracking with DeepLabCut. http://dx.doi. 
org/10.1101/2021.04.30.442096, BioRxiv, 2021.04.30.442096. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. http://dx.doi.org/10.1038/nature14539. LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., et al. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4), 541–551. http://dx.doi.org/10.1162/neco.1989.1.4.541. Li, X., La, R., Wang, Y., Hu, B., & Zhang, X. (2020). A deep learning approach for mild depression recognition based on functional connectivity using electroencephalography. Frontiers in Neuroscience, 14, 192. http://dx.doi.org/ 10.3389/fnins.2020.00192. Lillicrap, T. P., Cownden, D., Tweed, D. B., & Akerman, C. J. (2016). Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7(1), 13276. http://dx.doi.org/10.1038/ncomms13276. Lillicrap, T. P., Santoro, A., Marris, L., Akerman, C. J., & Hinton, G. (2020). Backpropagation and the brain. Nature Reviews Neuroscience, 21(6), 335–346. http://dx.doi.org/10.1038/s41583-020-0277-3. Lisman, J. E., Fellous, J.-M., & Wang, X.-J. (1998). A role for NMDA-receptor channels in working memory. Nature Neuroscience, 1(4), 273–275. http: //dx.doi.org/10.1038/1086. Lu, J., Li, C., Singh-Alvarado, J., Zhou, Z. C., Fröhlich, F., Mooney, R., et al. (2018). MIN1PIPE: A miniscope 1-photon-based calcium imaging signal extraction pipeline. Cell Reports, 23(12), 3673–3684. http://dx.doi.org/10.1016/j.celrep. 2018.05.062. Lundqvist, M., Herman, P., & Miller, E. K. (2018). Working memory: Delay activity, yes! persistent activity? Maybe not. Journal of Neuroscience, 38(32), 7013–7019. http://dx.doi.org/10.1523/jneurosci.2485-17.2018. Lundqvist, M., Herman, P., Warden, M. R., Brincat, S. L., & Miller, E. K. (2018). Gamma and beta bursts during working memory readout suggest roles in its volitional control. Nature Communications, 9(1), 394. http://dx.doi.org/10. 1038/s41467-017-02791-8. Lundqvist, M., Rose, J., Herman, P., Brincat, S. L., Buschman, T. J., & Miller, E. K. (2016). Gamma and beta bursts underlie working memory. Neuron, 90(1), 152–164. http://dx.doi.org/10.1016/j.neuron.2016.02.028. Luo, W., Xing, J., Milan, A., Zhang, X., Liu, W., & Kim, T.-K. (2020). Multiple object tracking: A literature review. Artificial Intelligence, 293, Article 103448. http://dx.doi.org/10.1016/j.artint.2020.103448. Manchanda, S., & Sharma, S. (2016). Analysis of computer vision based techniques for motion detection. In 2016 6th international conference - cloud system and big data engineering (confluence) (pp. 445–450). http://dx.doi.org/ 10.1109/confluence.2016.7508161. Mandler, G. (2002). Origins of the cognitive (r)evolution. Journal of the History of the Behavioral Sciences, 38(4), 339–353. http://dx.doi.org/10.1002/jhbs.10066. Mante, V., Sussillo, D., Shenoy, K. V., & Newsome, W. T. (2013). Contextdependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474), 78–84. http://dx.doi.org/10.1038/nature12742. Marder, E., Abbott, L. F., Turrigiano, G. G., Liu, Z., & Golowasch, J. (1996). Memory from the dynamics of intrinsic membrane currents. Proceedings of the National Academy of Sciences, 93(24), 13481–13486. http://dx.doi.org/10. 1073/pnas.93.24.13481. Mathis, A., Mamidanna, P., Cury, K. M., Abe, T., Murthy, V. N., Mathis, M. W., et al. (2018). DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience, 21(9), 1281–1289. 
http: //dx.doi.org/10.1038/s41593-018-0209-y. Mayor, J., Gomez, P., Chang, F., & Lupyan, G. (2014). Connectionism coming of age: Legacy and future challenges. Frontiers in Psychology, 5, 187. http: //dx.doi.org/10.3389/fpsyg.2014.00187. McFarland, J. L., & Fuchs, A. F. (1992). Discharge patterns in nucleus prepositus hypoglossi and adjacent medial vestibular nucleus during horizontal eye movement in behaving macaques. Journal of Neurophysiology, 68(1), 319–332. http://dx.doi.org/10.1152/jn.1992.68.1.319. Merel, J., Botvinick, M., & Wayne, G. (2019). Hierarchical motor control in mammals and machines. Nature Communications, 10(1), 5489. http://dx.doi. org/10.1038/s41467-019-13239-6. Miconi, T. (2017). Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks. ELife, 6, Article e20899. http://dx.doi.org/10.7554/elife.20899. Miller, E. K., Erickson, C. A., & Desimone, R. (1996). Neural mechanisms of visual working memory in prefrontal cortex of the macaque. Journal of Neuroscience, 16(16), 5154–5167. http://dx.doi.org/10.1523/jneurosci.16-16-05154.1996. Millidge, B., Tschantz, A., & Buckley, C. L. (2020). Predictive coding approximates backprop along arbitrary computation graphs. ArXiv. Minsky, M., & Papert, S. A. (1969). Perceptrons: An introduction to computational geometry. MIT Press. Misman, M. F., Samah, A. A., Ezudin, F. A., Majid, H. A., Shah, Z. A., Hashim, H., et al. (2019). Classification of adults with autism spectrum disorder using deep neural network. In 2019 1st international conference on artificial intelligence and data sciences (vol. 00) (pp. 29–34). http://dx.doi.org/10.1109/aidas47888. 2019.8970823. Miyawaki, Y., Uchida, H., Yamashita, O., Sato, M., Morito, Y., Tanabe, H. C., et al. (2008). Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron, 60(5), 915–929. http://dx.doi.org/10.1016/j.neuron.2008.11.004. Mongillo, G., Barak, O., & Tsodyks, M. (2008). Synaptic theory of working memory. Science, 319(5869), 1543–1546. http://dx.doi.org/10.1126/science. 1150769. Moravčík, M., Schmid, M., Burch, N., Lisý, V., Morrill, D., Bard, N., et al. (2017). DeepStack: Expert-level artificial intelligence in heads-up nolimit poker. Science, 356(6337), Article eaam6960. http://dx.doi.org/10.1126/ science.aam6960. Mozer, M. C. (1987). Early parallel processing in reading: A connectionist approach. In M. Coltheart (Ed.), Attention and performance 12: The psychology of reading. Lawrence Erlbaum Associates, Inc. Mukamel, E. A., Nimmerjahn, A., & Schnitzer, M. J. (2009). Automated analysis of cellular signals from large-scale calcium imaging data. Neuron, 63(6), 747–760. http://dx.doi.org/10.1016/j.neuron.2009.08.009. Musall, S., Urai, A. E., Sussillo, D., & Churchland, A. K. (2019). Harnessing behavioral diversity to understand neural computations for cognition. Current Opinion in Neurobiology, 58, 229–238. http://dx.doi.org/10.1016/j.conb.2019. 09.011. Nath, T., Mathis, A., Chen, A. C., Patel, A., Bethge, M., & Mathis, M. W. (2019). Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nature protocols, 14(7), 2152–2176. http://dx.doi.org/10.1038/ s41596-019-0176-0. Nilsson, S. R., Goodwin, N. L., Choong, J. J., Hwang, S., Wright, H. R., Norville, Z. C., et al. (2020). Simple Behavioral Analysis (SimBA) – an open source toolkit for computer classification of complex social behaviors in experimental animals. 
http://dx.doi.org/10.1101/2020.04.19.049452, BioRxiv, 2020.04.19.049452. Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641–1646. http://dx.doi.org/10.1016/j.cub. 2011.08.031. Nonaka, S., Majima, K., Aoki, S. C., & Kamitani, Y. (2020). Brain hierarchy score: Which deep neural networks are hierarchically brain-like? http://dx.doi.org/ 10.1101/2020.07.22.216713, BioRxiv, 2020.07.22.216713. Noudoost, B., & Moore, T. (2011). Control of visual cortical signals by prefrontal dopamine. Nature, 474(7351), 372–375. http://dx.doi.org/10.1038/ nature09995. Oh, J., Oh, B.-L., Lee, K.-U., Chae, J.-H., & Yun, K. (2020). Identifying schizophrenia using structural MRI with a deep learning algorithm. Frontiers in Psychiatry, 11, 16. http://dx.doi.org/10.3389/fpsyt.2020.00016. O’Reilly, R., Braver, T., & Cohen, J. D. (1999). A biologically based computational model of working memory. In A. Miyake, & P. Shah (Eds.), Models of working memory: Mechanisms of active maintenance and executive control (pp. 375–411). Cambridge University Press, http://dx.doi.org/10.1017/ cbo9781139174909.014. 611 T. Macpherson, A. Churchland, T. Sejnowski et al. Neural Networks 144 (2021) 603–613 Orrù, G., Pettersson-Yeo, W., Marquand, A. F., Sartori, G., & Mechelli, A. (2012). Using Support Vector Machine to identify imaging biomarkers of neurological and psychiatric disease: A critical review. Neuroscience & Biobehavioral Reviews, 36(4), 1140–1152. http://dx.doi.org/10.1016/j.neubiorev. 2012.01.004. Parsapoor, M. (2016). Brain emotional learning-based prediction model (for long-term chaotic prediction applications). ArXiv. Pereira, T. D., Aldarondo, D. E., Willmore, L., Kislin, M., Wang, S. S.-H., Murthy, M., et al. (2019). Fast animal pose estimation using deep neural networks. Nature Methods, 16(1), 117–125. http://dx.doi.org/10.1038/s41592-018-0234-5. Petersen, A., Simon, N., & Witten, D. (2017). SCALPEL: Extracting neurons from calcium imaging data. ArXiv. Pfeffer, C. K., Xue, M., He, M., Huang, Z. J., & Scanziani, M. (2013). Inhibition of inhibition in visual cortex: The logic of connections between molecularly distinct interneurons. Nature Neuroscience, 16(8), 1068–1076. http://dx.doi. org/10.1038/nn.3446. Pfeiffer, M., & Pfeil, T. (2018). Deep learning with spiking neurons: Opportunities and challenges. Frontiers in Neuroscience, 12, 774. http://dx.doi.org/10.3389/ fnins.2018.00774. Plis, S. M., Hjelm, D. R., Salakhutdinov, R., Allen, E. A., Bockholt, H. J., Long, J. D., et al. (2014). Deep learning for neuroimaging: A validation study. Frontiers in Neuroscience, 8, 229. http://dx.doi.org/10.3389/fnins.2014.00229. Pnevmatikakis, E. A. (2019). Analysis pipelines for calcium imaging data. Current Opinion in Neurobiology, 55, 15–21. http://dx.doi.org/10.1016/j.conb.2018.11. 004. Pnevmatikakis, E. A., Soudry, D., Gao, Y., Machado, T. A., Merel, J., Pfau, D., et al. (2016). Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron, 89(2), 285–299. http://dx.doi.org/10.1016/j.neuron. 2015.11.037. Prut, Y., Vaadia, E., Bergman, H., Haalman, I., Slovin, H., & Abeles, M. (1998). Spatiotemporal structure of cortical activity: Properties and behavioral relevance. Journal of Neurophysiology, 79(6), 2857–2874. http://dx.doi.org/10. 1152/jn.1998.79.6.2857. Pulvermüller, F., Tomasello, R., Henningsen-Schomers, M. R., & Wennekers, T. (2021). 


How To Compare Machine Learning Algorithms in Python with scikit-learn

It is important to compare the performance of multiple machine learning algorithms in a consistent way, so that differences in scores reflect the algorithms themselves rather than the evaluation setup. In this post you will discover...
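As a minimal Python sketch of this idea, the snippet below evaluates several classifiers on identical cross-validation folds so their accuracy scores can be compared fairly. The dataset (scikit-learn's built-in breast-cancer data) and the particular models chosen here are illustrative assumptions, not a prescription.

# Sketch: compare several classifiers on the same cross-validation folds.
# Dataset and model choices are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Scaling is wrapped in a pipeline so it is refit within each fold,
# avoiding information leakage from the held-out data.
models = {
    "LogisticRegression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "DecisionTree": DecisionTreeClassifier(random_state=7),
    "SVM": make_pipeline(StandardScaler(), SVC()),
}

# A fixed random_state gives every model exactly the same data splits.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (std {scores.std():.3f})")

Because every algorithm is scored on the same folds, the printed means and standard deviations can be compared directly; plotting the per-fold scores (for example as box plots) is a common next step for visualising the spread of each algorithm's performance.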