Need to eliminate biases in algorithms as AI on the rise: RBI Governor Shaktikanta Das


Reserve Bank of India Governor Shaktikanta Das. File
| Photo Credit: PTI

RBI Governor Shaktikanta Das on Friday emphasised the need to eliminate biases in algorithms as the use of artificial intelligence (AI) and machine learning (ML) rises.

Delivering the inaugural address at the 18th Statistics Day Conference organised by the RBI, he said statistics had been growing steadily as a preferred tool for drawing inferences in diverse fields, and that the discipline had moved beyond the collection of facts to focus more on interpretation and on drawing inferences while accounting for uncertainty.

The Reserve Bank of India (RBI) has ventured into AI/ML analytics in multiple areas. Under the RBI’s aspirational goals for RBI@100, Mr. Das said the central bank was aiming to develop cutting-edge systems for high-frequency and real-time data monitoring and analysis.



Scientists build a camera to ‘show’ how animals see moving things


This illustration compares three flowers – summer snowflake (A, B), blue phlox (C, D), and a blue violet (E, F) – in honeybee false colour (left) and human-visible colours (right).
| Photo Credit: Vasas V, et al., 2024, PLOS Biology, CC-BY 4.0

To most people, leaves are green and oranges are orange. But if our pets could speak, they’d disagree.

We know there are many different ways to ‘see’ the world because that’s the diversity we have found in animals. Organisms with the ability to see have two or more eyes that capture light reflected by different surfaces in their surroundings and turn it into visual cues. But while all eyes have this common purpose, the specialised cells that respond to the light, called photoreceptors, are unique to each animal.

For instance, human eyes can only detect wavelengths of light between 380 and 700 nanometres (nm); this is the visible range. Honey bees and many birds, on the other hand, can also ‘see’ ultraviolet light (10-400 nm).

While the human visual range is relatively limited, that hasn’t dampened our curiosity about how animals see the world.

Thankfully, we don’t have to imagine too much. Researchers at the University of Sussex in the U.K. and George Mason University (GMU) in the U.S. have put together a new camera that can view the world the way animals do. In a paper published in PLoS Biology, the team writes that their device can even reveal the colours different animals see in moving scenes, which hadn’t been possible before.

Making the invisible visible

Animals use colours to intimidate predators, entice mates or conceal themselves. Detecting variations in colour is thus essential to an animal’s survival. Animals have evolved highly sensitive photoreceptors that can detect ultraviolet and infrared light; many can even perceive polarised light as part of their Umwelt – the perceptual world a species’ biology makes possible and within which it makes meaning and communicates.

Neither human eyes nor most commercial cameras have been able to tap into this uncharted territory of animal vision. In the new study, experts in biology, computer vision, and programming came together to create a tool that could record and track the complexity of animal visual signalling.

The tool combined existing multispectral photography techniques with a new camera setup and a beam-splitter (to separate ultraviolet and visible light), all encased in a custom 3D-printed unit. The system recorded videos simultaneously in visible and ultraviolet channels in natural lighting. The researchers then fed the camera output through code (written in Python) that converted the visual data into estimates of the physical signals photoreceptor cells would produce.
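
The study’s actual pipeline is species-specific and calibrated against measured data, but the core step – weighting each recorded channel by a photoreceptor’s spectral sensitivity to estimate how strongly that receptor would respond, then mapping the result to display colours – can be sketched roughly as below. Every number and name here is a made-up illustration, not the study’s code.

```python
import numpy as np

# Made-up sensitivities of a honeybee's three photoreceptor classes
# (UV, blue, green) to the camera's three recorded channels. Real values
# would come from published measurements for the species being modelled.
BEE_SENSITIVITY = np.array([
    [0.9, 0.1, 0.0],   # UV receptor: responds mostly to the UV channel
    [0.1, 0.8, 0.2],   # blue receptor
    [0.0, 0.3, 0.9],   # green receptor
])

def to_receptor_signals(frame):
    """Turn a (height, width, 3) multispectral frame into per-pixel
    estimates of how strongly each bee photoreceptor would respond."""
    return np.einsum("hwc,rc->hwr", frame, BEE_SENSITIVITY)

def to_false_colour(signals):
    """Map receptor responses onto human-visible display channels:
    bee-green -> red, bee-blue -> green, UV -> blue."""
    signals = signals / (signals.max() + 1e-8)   # normalise for display
    return signals[..., ::-1]

# A toy 2x2 'frame' whose top-left pixel reflects a lot of ultraviolet.
frame = np.zeros((2, 2, 3))
frame[0, 0] = [1.0, 0.2, 0.1]
print(to_false_colour(to_receptor_signals(frame))[0, 0])
```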

Finally, the researchers modified these signals based on what they already knew about how an animal’s photoreceptors work, and produced videos true to what that animal might see. The videos used false colours so that, for example, a particular human-visible colour could stand in for ultraviolet light.

In sum, the camera system translated what animals see in visible and non-visible light into colours compatible with the human eye.

The time challenge

You may have already seen false-colour images – like the Hubble Space Telescope’s iconic snap of the ‘Pillars of Creation’. The stars and nebulae don’t actually look that resplendent to human eyes. They are coloured that way to show what the telescope saw in, say, infrared or radio wavelengths. Scientists have also used false-colour images to understand how flowers reflect ultraviolet light to influence the behaviour of insects nearby.

But false colours can only stand in for so much. According to the researchers, existing techniques to visualise the colours animals see either use object-reflected light to predict how an animal’s photoreceptors would respond or require a series of photographs in wavelengths beyond human vision (taken with the help of bandpass optical filters). Both approaches require the subject to be motionless. The new system, however, can visualise free-moving organisms in their natural settings.

In addition, Pavan Kumar Reddy Katta, a graduate teaching assistant at GMU and one of the study’s authors, said the team wrote a program that could accept both ultraviolet- and visible-light data and spit out complete videos. “We made use of a continuous stream which allowed us to resolve our data at various points of space and time and produce real-time visualisations in animal-vision,” he told this author.

The next big thing in animal vision

Equipped with the new camera, the research team checked what the flower black-eyed Susan (Rudbeckia hirta) looks like to honey bees (Apis mellifera).

“To our eye, the black-eyed Susan appears entirely yellow because in the human-visible range, it reflects primarily long wavelength light,” the team wrote in its paper. “Whereas in the bee false colour image, the distal petals appear magenta because they also reflect ultraviolet, stimulating both the ultraviolet-sensitive photoreceptors … and those sensitive to green light … By contrast, the central portion of the petals does not reflect ultraviolet and therefore appears red.”

According to the paper, the visual mechanisms animals have evolved to communicate and protect themselves could help solve many of our detection problems. For example, the animal-vision video could help people navigate wild landscapes better and without hurting camouflaged animals. It can help farmers spot fruit pests that are not visible to the human eye but are readily visible to animals that have evolved to eat those fruits.

Daniel Hanley, assistant professor at GMU and the study’s corresponding author, said their invention could even transform the way wildlife documentary films are made. The camera system could allow filmmakers and ecologists to record the animal world through a new lens and create new visual experiences. He also said the platform’s striking images could be used to communicate the science of the living world to young audiences.

“We are thinking of creating a science exhibit for children using our setup, flowers, and live animals,” Dr. Hanley said. “Where children can just click a button to experience what a snake might see or a honeybee might see.”

Sanjukta Mondal is a chemist-turned-science-writer with experience in writing popular science articles and scripts for STEM YouTube channels.



A (very) basic guide to artificial intelligence


Intelligence is the capacity of living beings to apply what they know to solve problems. ‘Artificial intelligence’ (AI) is intelligence in a machine. There is currently no one definition of AI.

A simple place to begin is with AI’s materiality, as a machine-software combination.

What does the machine do?

A simple example-problem in AI is linear separability. You plot some points on a graph and then find a way to draw a straight line through the graph such that it divides the points into two distinct groups.
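
As a rough illustration (not from the article), a few lines of Python can find such a line for a toy set of points; the classic perceptron rule below nudges the line whenever a point lands on the wrong side. The data and numbers are made up.

```python
import numpy as np

# Toy 2D points: the first three belong to one group, the rest to the other.
X = np.array([[0.5, 1.0], [1.0, 0.5], [1.5, 1.0],
              [4.0, 4.5], [5.0, 4.0], [4.5, 5.0]])
y = np.array([0, 0, 0, 1, 1, 1])

w = np.zeros(2)   # the line's orientation
b = 0.0           # the line's offset
for _ in range(20):                          # a few passes over the data
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi                # no change when the guess is right
        b += (yi - pred)

# Points with w.x + b > 0 fall on the '1' side of the line.
print(w, b)
print([1 if xi @ w + b > 0 else 0 for xi in X])   # -> [0, 0, 0, 1, 1, 1]
```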

Let’s make this problem more concrete. For example, how would a machine differentiate between a cat and a dog?

Say you give the machine 1,000 pictures of cats and 1,000 pictures of dogs, and ask it to separate them. (This task is usually not given to a linear classifier but it illustrates a point.) You also equip the machine with tools — say, a camera and an app that can measure distances of different parts of an image, can analyse depth (using trigonometry), and can assess colours.

The machine can proceed by classifying the cat- and dog-pictures in different ways, say, by shape of the face, shape of the eyes, shape of the paw, body size, size of the tongue, fur colours, etc. Because the machine has the necessary computing power, it can plot these features two at a time on a graph. For example, the x-axis can represent the slope of the face and the y-axis the length of the paw. Or it can plot them three at a time in a 3D graph.

In all these cases, you watch until the machine has found a way to separate the pictures into two groups such that one group is mostly cats and the other is mostly dogs. At this point, you stop the machine.

How hard is decision-making?

Sometimes it’s very easy to separate a given dataset into two pieces – as with sorting a pile of marbles of two kinds – because you can make very reliable decisions with just one dimension, or parameter. Sometimes it’s more difficult, like with the cats and dogs, where you may need around a dozen parameters.

Sometimes it is harder still – like asking the computer on a driverless car to decide whether it should apply the brakes based on how fast a bird is flying in front of the car. The outcomes on one side of the line stand for ‘no’ and those on the other side stand for ‘yes’ – and solving for this will require hundreds of parameters.

The machine will also have to account for the context of the decision. For example, if the person in the car is in a hurry to get to a hospital, is killing the bird okay? Or if the person in the car is not in a hurry, how quickly should the car brake? And so on.

Sometimes it’s just mind-boggling. For example, ChatGPT is able to accept an input question from a user, make ‘sense’ of it, and answer accordingly. This ‘sense’ comes from its training corpus — the billions of sequences of words and sentences scraped from the internet.

In particular, ChatGPT learnt not by classifying words but by predicting the next word in a given sentence. More specifically, large language models (LLMs) like ChatGPT generate the text of the response without classifying it or relating the question to similar examples. (This is why generative AI is different from a classification model, which is like a sorting machine.)

LLMs are trained on a large corpus of text by repeatedly predicting words that have been hidden from them – for models like ChatGPT, the next word in a sequence; for some other models, words randomly replaced by blanks. And while learning to fill in the missing word correctly, the AI also learns something about the process that created the text, which is the real world.
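
A real LLM does this with a gigantic neural network, but the training objective itself – guess the hidden word, check, adjust – can be mimicked with a toy word-counting model. This is purely illustrative; it is not how ChatGPT is built.

```python
from collections import Counter, defaultdict

# A toy 'corpus'; a real LLM trains on billions of words scraped from the web.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the continuation most often seen after `word` in training."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # -> 'cat' (it follows 'the' more often than 'mat' or 'fish')
```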

ChatGPT is so good because it uses more than 100 billion parameters.

What are some types of machine-learning?

Finding a line that separates data into two groups is one of the simpler algorithms in machine learning. There are many algorithms that can classify data, and some of them are very complex.

There are three main ways in which ‘machines’ can be classified depending on the way they learn: supervised learning, unsupervised learning, and reinforcement learning.

In supervised learning, the data is labelled (e.g. in a table, the row and column titles are provided and datatypes – numbers, verbs, names, etc. – are pointed out). In unsupervised learning, this information is withheld, forcing the machine to work out how the data can be organised before it solves a problem. In reinforcement learning, engineers score the machine’s output as it learns and solves problems on its own, and the machine adjusts itself based on the scores.
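
A minimal sketch of the first two styles (reinforcement learning is harder to show in a few lines) might look like this, using the scikit-learn library – not mentioned in the article, chosen here only for brevity:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[0.5, 1.0], [1.0, 0.5], [4.0, 4.5], [5.0, 4.0]])
labels = np.array([0, 0, 1, 1])   # the 'answers' handed to the machine

# Supervised learning: labels are provided, and the model learns to match them.
clf = LogisticRegression().fit(X, labels)
print(clf.predict([[4.5, 4.2]]))   # -> [1]

# Unsupervised learning: no labels; the machine must find the grouping itself.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)                    # two groups, but it is up to us to name them
```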

The way in which information flows inside the machine is governed by artificial neural networks (ANNs), the software that ‘animates’ the hardware.

What is an artificial neural network?

An ANN comprises computing units, or nodes, connected together in such a way that the whole network learns the way an animal brain does. The nodes mimic neurons and the connections between nodes mimic synapses. Every ANN has two important components: activation functions and weights.

The activation function is an algorithm that runs at a node. Its job is to accept the inputs from other nodes to which it is connected and compute an output. The inputs and outputs are in the form of real numbers.

The weight refers to the ‘importance’ an activation function gives to a particular input. For example, say there are different nodes to estimate the fur colour, tail length, and dental profile in a given photo of a cat or a dog. All these nodes provide their outputs as inputs to a node responsible for separating ‘cat’ from ‘dog’. This way, the nodes can be ‘taught’ to adjust their outcomes by adjusting the relative weights they assign to different inputs.

While nodes are computing units, the ANN itself is not a physical entity. It is mathematical. A node is the ‘site’ of a mathematical function. Put another way, the ANN is like an algorithm that passes information from one activation function to the next in a specific order. The functions modify the information they receive in different ways.
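
To make this concrete, here is a minimal, hypothetical node in Python: three inputs arrive, each is scaled by its weight, and an activation function turns the weighted total into the node’s output. The numbers are invented for illustration.

```python
import numpy as np

def sigmoid(x):
    """A common activation function: squashes any real number into (0, 1)."""
    return 1 / (1 + np.exp(-x))

# Hypothetical inputs from three upstream nodes, already encoded as numbers
# (say, fur colour, tail length and dental profile from the earlier example).
inputs = np.array([0.8, 0.3, 0.6])

# The weights express how much this node 'cares' about each input; training
# consists of repeatedly nudging these numbers.
weights = np.array([0.9, -0.4, 0.2])
bias = 0.1

output = sigmoid(inputs @ weights + bias)
print(output)   # a value near 1 could stand for 'cat', near 0 for 'dog'
```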

What are transformers?

Transformers are a specialised type of ANN. They are easy to train in parallel, unlike the ANN architectures that preceded them. This is how, for example, ChatGPT could be trained on text scraped from much of the web.

Here, the ANN is broken up into two parts: the encoder and the decoder. Say an ANN is required to recognise the presence of a cat in a photograph. The encoder accepts the photograph, breaks it up into small pieces (say, 10 x 10 pixels), and encodes the visual information as numerical data (e.g. 0s and 1s). The decoder accepts this data and processes the numbers to reconstruct the information content in the photograph.

The transformer architecture, originally developed at Google and released in 2017, is built around ‘attention’: a mechanism that lets the ANN weigh different parts of the input data by how relevant they are to one another. It performs better as a result.

The advent of transformers revolutionised machines’ ability to translate long, complicated sentences.
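
The ‘attention’ at the heart of the transformer can be sketched in a few lines of NumPy. This is a toy, single-head version; real transformers add learned projection matrices, multiple heads and many stacked layers.

```python
import numpy as np

def attention(queries, keys, values):
    """Scaled dot-product attention: each position scores every position by
    relevance, turns the scores into weights, and averages the values."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax
    return weights @ values

# Three 'words', each represented by a vector of four made-up numbers.
x = np.random.rand(3, 4)
print(attention(x, x, x).shape)   # (3, 4): one context-aware vector per word
```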

What are GPUs?

The GPU is the physical processor that ‘runs’ the ANN. It was originally developed to render graphics for video games. It was better at this task than other processors at the time because it was designed to run computing tasks in parallel. It has since been widely adopted as the basic computing unit for ANNs for the same reason.
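
The workload GPUs excel at is essentially very large matrix arithmetic, in which every output element can be computed independently of the others. A small sketch of that structure (it runs on the CPU here; the point is only the shape of the job):

```python
import numpy as np

A = np.random.rand(512, 512)
B = np.random.rand(512, 512)

# One vectorised call: a GPU (or an optimised CPU library) can work through
# the 512 x 512 independent dot products largely in parallel.
C = A @ B

# The same numbers computed one at a time - the serial view of the job.
C_slow = np.array([[A[i] @ B[:, j] for j in range(8)] for i in range(8)])
print(np.allclose(C[:8, :8], C_slow))   # True
```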

Nvidia, which makes GPUs, has emerged as a technology giant as AI has become more popular. Its valuation was the fastest in history to go from $1 trillion to $2 trillion (in nine months). Nearly every company building large AI models uses Nvidia’s GPU-based chips to do so.

In a 2023 analysis, the financial services provider Seeking Alpha wrote that Nvidia’s overwhelming market share has stoked “resistance” in three ways: competitors are trying to develop and switch to non-GPU hardware; researchers are building smaller learning models (with smaller ANNs) that need fewer resources than a top-shelf Nvidia chip; and developers are building new software to sidestep dependency on specific hardware.

The author is grateful to Viraj Kulkarni for inputs.



Reducing ammonia emissions through targeted fertilizer management


Using machine learning, researchers have produced detailed estimates of ammonia emissions from rice, wheat and maize crops. The dataset enabled a cropland-specific assessment of the potential for emission reductions, which indicates that effective fertilizer management in the growing of these crops could lower atmospheric ammonia emissions from farming by up to 38%. The paper was published in the journal Nature.

Atmospheric ammonia is a key environmental pollutant that affects ecosystems across the planet, as well as human health. Around 51-60% of anthropogenic ammonia emissions can be traced back to crop cultivation, and about half of these emissions are associated with three main staple crops: rice, wheat and maize. However, quantifying any potential reductions in ammonia emissions related to specific croplands at high resolution is challenging and depends on details such as nitrogen inputs and local emission factors.

Yi Zheng from the Southern University of Science and Technology, Shenzhen, China, and others used machine learning to model ammonia output from rice, wheat and maize agriculture worldwide on the basis of variables that include climate, soil characteristics, crop types, irrigation, tillage and fertilization practices. To inform the model, the researchers developed a dataset of ammonia emissions from over 2,700 observations obtained via a systematic review of the published literature. Using this model, the researchers estimate that global ammonia emissions from these crops reached 4.3 teragrams (4.3 billion kilograms) in 2018. They calculated that spatially optimizing fertilizer management – as guided by the model – could cut ammonia emissions from the three crops by 38%. The optimised strategy involves placing enhanced-efficiency fertilizers deeper into the soil using conventional tillage practices during the growing season.
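
The summary above does not say which learning algorithm the authors used. As a generic illustration of the approach – fit a model that maps site-level predictors (climate, soil, crop and management variables) to measured ammonia loss, then apply it to croplands worldwide – one could write something like the following. All column names and values are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training table assembled from published field observations.
data = pd.DataFrame({
    "temperature_c":  [22, 28, 18, 31],
    "soil_ph":        [6.5, 7.8, 5.9, 8.1],
    "n_input_kg_ha":  [120, 180, 90, 200],
    "tillage":        [1, 0, 1, 0],          # 1 = conventional tillage
    "nh3_loss_kg_ha": [14.0, 28.5, 8.2, 33.1],
})

X = data.drop(columns="nh3_loss_kg_ha")
y = data["nh3_loss_kg_ha"]

# A generic tabular regressor; the study's own model may well differ.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict(X.head(1)))   # emission estimate for the first site
```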

The researchers found that under the fertilizer-management scenario, rice crops could contribute 47% of the total reduction potential, while maize and wheat could contribute 27% and 26%, respectively. Without any management strategies, the authors calculated, ammonia emissions could rise by between 4.6% and 15.8% by 2100, depending on the level of future greenhouse gas emissions.



British School Employs AI Robot As Principal Headteacher For Enhanced Decision-Making


Cottesmore is an academic boarding prep school for boys and girls.

Artificial intelligence is taking over many jobs by automating tasks that were once done by humans. This is happening in a variety of industries, including manufacturing, customer service, healthcare, and transportation. In a surprising move, a preparatory school in the United Kingdom has named an AI robot as its “principal headteacher.” Cottesmore School, located in West Sussex, collaborated with an artificial intelligence developer to design Abigail Bailey, the robot, with the purpose of assisting the school’s headmaster.

Tom Rogerson, headmaster of Cottesmore, told The Telegraph that he is using the robot for advice on issues ranging from how to support fellow staff members to helping pupils with ADHD and writing school policies. The technology works in a similar way to ChatGPT, the online AI service in which users type questions that are then answered by the chatbot’s algorithms.

Mr Rogerson said the AI principal has been developed to have a wealth of knowledge in machine learning and educational management, with the ability to analyze vast amounts of data.

He told The Telegraph: “Sometimes having someone or something there to help you is a very calming influence.

“It’s nice to think that someone who is unbelievably well trained is there to help you make decisions.

“It doesn’t mean you don’t also seek counsel from humans. Of course you do. It’s just very calming and reassuring knowing that you don’t have to call anybody up, bother someone, or wait around for an answer.”

He added: “Being a school leader, a headmaster, is a very lonely job. Of course we have head teacher’s groups, but just having somebody or something on tap that can help you in this lonely place is very reassuring.”

Cottesmore School charges fees of up to almost £32,000 (Rs 32,48,121) a year for UK students.

The school, which has received accolades such as Tatler’s “Prep School of the Year,” is a boarding institution catering to boys and girls between the ages of four and 13.
