Online Degrees Blog at New York Tech
Neural Networks 101: Understanding the Basics of This Key AI Technology

Artificial intelligence (AI) has been an extremely hot topic the past couple of years, and for good reason. AI has made it possible for human beings to start performing tasks, completing work, and solving problems at a previously impossible rate. But what exactly is AI, and how is it able to accomplish all of these tremendous things?

The heart of AI lies with neural networks. They act as a fundamental component in driving AI systems, making them capable of performing complex tasks. Understanding how neural networks function is essential to comprehending how they led to the introduction of AI, and the subsequent spread of AI into so many aspects of daily life.

Keep reading to explore how neural networks work and how they fuel AI-powered tools, applications, and systems.1

Definition of Neural Networks

Neural networks, also known as artificial neural networks (ANNs), are a method of teaching computers to process data. A subset of machine learning (ML), they consist of algorithms that search for relationships within data sets.

Neural networks essentially mimic the way the brain works. They resemble the structures of interconnected neurons, which are nerve cells that send messages throughout the body. This extreme interconnectedness and rapid communication is what makes them so effective in processing information and learning to solve problems.

Artificial neural networks function as building blocks in the same way neurons do for the brain and nervous system. They transmit and process information in interconnected units called artificial neurons. Every neuron processes data using a simple mathematical operation, similar to how biological neurons receive and send electrical signals.2

Components of Neural Networks

Artificial neural networks are made up of three main components:3

  1. An input layer that receives the data
  2. An inner layer that processes the information
  3. An output layer that transmits the result

How Neural Networks Work

The number of inner or hidden layers in a neural network varies depending on the complexity of the problem it needs to solve. A simple problem may require only a single hidden layer, while a series of complex math problems may require several. Neural networks use a feedforward process in which data passes from the input layer, like one side of a sandwich, through the hidden layers, to the output layer on the other side, to make predictions or classify data.4

Every neuron takes the weighted sum of its inputs and then applies an activation function to produce an output that is passed to the next layer. Weighted connections represent the strength of the links between neurons. When training a network to optimize its performance, you adjust those weights to reduce the differences between its predictions and the target values.
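As a minimal sketch of the weighted-sum-then-activate step described above (the inputs, weights, and bias below are made-up values, and sigmoid is chosen arbitrarily as the activation function):

```python
import math

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical neuron with two inputs
out = neuron_output([0.5, -1.0], [0.8, 0.2], bias=0.1)
print(round(out, 4))  # → 0.5744
```

In a full network, this output would become one of the inputs to every neuron in the next layer.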

Non-linearity refers to non-linear activation functions introduced to the individual nodes of a linear network.5 Activation functions determine the output of a neuron based on the weighted sum of its inputs. They allow the modeling of complex relationships within data. Examples of activation functions include:

  • Sigmoid function, which maps inputs to a range between zero and one in traditional neural networks
  • Rectified linear units (ReLU), which are used in deep learning to return the input for positive values or zero for negative values
  • Hyperbolic tangent (tanh) functions, which map inputs to a range between negative one and one in a neural network

Different Types of Neural Networks

Below is an overview of the most common types of neural networks currently in use.4 This list can change as the technology evolves.

Feedforward Neural Networks (FNNs)

FNNs, also called multi-layer perceptrons (MLPs), are characterized by a sequential flow of information that moves through neuron layers without relying on loops or cycles. They’re typically suitable for regression and classification tasks.

Convolutional Neural Networks (CNNs)

CNNs work with tasks using images, videos, and other grid-like data. They use convolutional layers to apply filters to input images. Those filters capture patterns and features, so you often see CNNs used in AI applications focused on image recognition, segmentation, and object detection.
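To illustrate what "applying a filter" means (with a tiny made-up image and a hand-written vertical-edge filter, not a trained CNN layer), a single convolution pass looks like this:

```python
def convolve2d(image, kernel):
    # Slide the kernel across the image (no padding, stride 1)
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Sum of elementwise products between the kernel
            # and the patch of the image under it
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# Toy 4x4 "image": dark on the left, bright on the right
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
# A vertical-edge-detecting kernel
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
print(convolve2d(image, kernel))  # → [[3, 3], [3, 3]]
```

The strong positive responses show the filter firing on the vertical edge; in a real CNN, the kernel values are learned during training rather than written by hand.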

Recurrent Neural Networks (RNNs)

These neural networks introduce loops into the network architecture to maintain hidden states that carry information from one step of a sequence to the next. RNNs process sequential data with a sense of memory.
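A minimal sketch of that loop, assuming a single-unit RNN with made-up weights (a real RNN would use weight matrices and learned parameters):

```python
import math

def rnn_step(x, h_prev, w_x, w_h, bias):
    # The new hidden state mixes the current input with the
    # previous hidden state -- this is the network's "memory"
    return math.tanh(w_x * x + w_h * h_prev + bias)

# Process a short sequence one element at a time
h = 0.0  # initial hidden state
for x in [1.0, 0.5, -0.5]:
    h = rnn_step(x, h, w_x=0.6, w_h=0.4, bias=0.0)
print(round(h, 4))
```

Because each step's output depends on the previous hidden state, the same input value can produce different results depending on what came before it in the sequence.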

Once you grasp the basics of how neural networks function, you get a clear picture of their importance to AI applications.

Why Do Neural Networks Matter to AI Applications?

Neural networks are what help AI make intelligent decisions without much human assistance. They learn and model relationships in complex, non-linear data. For example, you can train neural networks to recognize the different intents behind sentences that share a similar structure but require different actions:

  • Where can I buy a new pair of sneakers?
  • Where can I find quality snowshoes?

Although both questions follow the same pattern, the first is a search for sneakers and the second for snowshoes. Another example might be when a user wants to make an online payment versus transferring money from one account to another. Neural networks help AI applications understand similarities and differences in requests and take the correct action. None of that happens in a vacuum. You must put in time and effort to train neural networks to function as needed with AI applications.

What's Used to Train Neural Networks for AI Applications?

Labeled data is required to start training neural networks. It consists of input data along with corresponding outputs and labels. Neural networks learn by comparing their predictions to the actual labels. Those networks then adjust weights to reduce prediction errors.
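This compare-and-adjust loop can be sketched with a toy one-weight model trained by gradient descent (the data, learning rate, and update rule below are illustrative, not a production training recipe):

```python
# Labeled data: (input, label) pairs where the label is 2 * input,
# so the "true" weight the network should learn is 2
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # initial weight
lr = 0.05  # learning rate

for _ in range(200):
    for x, y in data:
        pred = w * x
        # Gradient of the squared error (pred - y)^2 with respect to w
        grad = 2 * (pred - y) * x
        # Adjust the weight to reduce the prediction error
        w -= lr * grad

print(round(w, 3))  # → 2.0
```

Each pass compares a prediction against its label and nudges the weight in the direction that shrinks the error; full-scale networks do the same thing across millions of weights via backpropagation.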

Loss functions quantify the error between a neural network’s predictions and the labels. Examples of standard loss functions include:6

Mean Squared Error (MSE)

Used with regression tasks, MSE calculates the average of the squared difference between predicted and target values.

Mean Absolute Error (MAE)

Also used for regression tasks, MAE calculates the average absolute differences between predicted and target values.

Cross-Entropy Loss

Used for classification tasks, cross-entropy measures the difference between predicted class probabilities and true class labels.

Binary Cross-Entropy (Log) Loss

Binary cross-entropy (log) loss is often used for binary classification tasks requiring an output of zero or one. Log loss measures the differences between predicted probabilities and actual binary labels.
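The loss functions above are short enough to write out directly (a self-contained sketch with made-up predictions and labels):

```python
import math

def mse(preds, targets):
    # Mean of squared differences (regression)
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def mae(preds, targets):
    # Mean of absolute differences (regression)
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

def binary_cross_entropy(probs, labels):
    # Penalizes confident wrong predictions heavily (binary classification)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, labels)) / len(probs)

preds, targets = [2.5, 0.0, 2.0], [3.0, -0.5, 2.0]
print(round(mse(preds, targets), 4))  # → 0.1667
print(round(mae(preds, targets), 4))  # → 0.3333
print(round(binary_cross_entropy([0.9, 0.2], [1, 0]), 4))  # → 0.1643
```

Note how MSE punishes the two half-unit errors more gently than it would a single one-unit error, while the log loss stays small here because both predicted probabilities lean toward the correct labels.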

Neural Networks Use Cases

One of the most popular uses of neural networks with AI is building processes to locate and recognize patterns and relationships in data. You see this at work in image and speech recognition applications.

Neural networks revolutionized natural language processing (NLP) by enabling models to understand and generate human language. GPT and BERT are examples of AI applications that use neural networks in that way.

Autonomous systems like self-driving cars and drones use neural networks for perception, decision-making, and vehicle control. Other AI applications that rely on neural networks include:3

  • Medical imaging machines
  • Algorithms for trading
  • Content recommendation systems
  • Gaming non-playable characters (NPCs)
  • Equipment monitoring
  • Social media content moderation

Expand Your Knowledge of Neural Networks and AI Technology

If you're interested in neural networks and other deep learning techniques, New York Institute of Technology online programs can take you on a deeper exploration of the technology currently transforming the world. Learn more about the programs, available resources, and our faculty experts who can help you find a career path that suits your interests. Schedule an appointment with an admissions outreach advisor.

New York Institute of Technology has engaged Everspring, a leading provider of education and technology services, to support select aspects of program delivery.