Bedrock Humanised Intelligence

Author: bedrock


Apr 7 — 2022

Outliers in data preprocessing: Spotting the odd one out!


More than 2.5 quintillion bytes of data were generated daily in 2020 (source: GS Analytics). To put this in perspective, a quintillion is a million million million or, more simply, a 1 followed by 18 zeros. It is therefore not surprising that a significant amount of this data is subject to errors. Data scientists are aware of this and routinely check their databases for values that stand out from the rest. This process, referred to as “outlier identification” in the case of numerical variables, has become a standard step in data preprocessing.

The search for outliers

The search for univariate outliers is quite straightforward. For instance, if we are dealing with human heights and most individuals' measurements are expected to range between 150 cm and 190 cm, then values recorded as 1.70 cm or 1700 cm must be understood to be annotation errors. Aside from such gross outliers, which should definitely be cleaned when performing data preprocessing tasks, there is still room for outliers that are inherent to the type of data we are dealing with. For instance, some people could be 140 cm or 200 cm tall. This type of outlier is typically identified with rules of thumb such as the absolute value of the z-score being greater than 3. Unless there is an obvious reason (such as an annotation error), these outliers should not be removed or cleaned in general; still, it is important to identify them and to monitor their influence on the modelling task to be performed.
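To make the rule of thumb concrete, here is a minimal sketch (not from the original article) of flagging values whose absolute z-score exceeds 3; the heights are invented for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: 200 heights around 170 cm plus one annotation error (1700 cm)
heights_cm = np.append(rng.normal(loc=170, scale=8, size=200), 1700)

# z-score: how many standard deviations each value lies from the sample mean
z_scores = (heights_cm - heights_cm.mean()) / heights_cm.std()

# Rule of thumb: flag observations with |z| > 3 as candidate outliers
print(heights_cm[np.abs(z_scores) > 3])  # the 1700 cm entry stands out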

Multivariate outliers

A more difficult problem arises when we are dealing with multivariate data. For example, imagine that we are dealing with human heights and weights and that we have obtained the data represented in the scatterplot below. The individual marked in red is not a univariate outlier in either of the two dimensions separately; however, when jointly considering both height and weight, this individual clearly stands out from the rest.

A popular technique for the identification of multivariate outliers is based on the use of the Mahalanobis distance, which is just a measure of how far a point x is from the centre of the data. Mathematically speaking, the formula is as follows:

d(x) = √( (x − μ)ᵀ Σ⁻¹ (x − μ) )

where μ (mu) represents the mean vector (i.e., the centre of the data) and Σ (Sigma) the covariance matrix, both of them typically being estimated from the data by the sample mean vector and the sample covariance matrix.
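As a minimal illustration of this formula (with invented two-dimensional data, and the mean vector and covariance matrix estimated from the sample):

import numpy as np

def mahalanobis_distance(X):
    """Distance of each row of X from the centre of the data, in the sense defined above."""
    mu = X.mean(axis=0)                                   # sample mean vector
    inv_sigma = np.linalg.inv(np.cov(X, rowvar=False))    # inverse sample covariance matrix
    diff = X - mu
    # square root of the quadratic form (x - mu)^T Sigma^{-1} (x - mu), row by row
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, inv_sigma, diff))

# Hypothetical height (cm) / weight (kg) measurements
X = np.array([[170, 70], [165, 62], [180, 85], [175, 78], [160, 58], [150, 95]])
print(mahalanobis_distance(X))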

Interestingly, the Mahalanobis distance may be used for drawing tolerance ellipses of points that are at a certain Mahalanobis distance from the centre of the data, thus allowing us to easily identify outliers. For instance, returning to the example of human height and weight, it can be seen that the individual marked in red is actually the most outlying point when taking into account the graphical shape of our dataset.

In fact, one could understand the Mahalanobis distance as the multivariate alternative to the z-score. More precisely, ‘being at a Mahalanobis distance d from the centre’ is the multivariate equivalent of ‘being d standard deviations away from the mean’ in the univariate setting. Therefore, under certain assumptions, such as the data being obtained from a multivariate Gaussian distribution, it is possible to estimate the proportion of individuals lying inside and outside a tolerance ellipse. In the case above, we are representing a 95% tolerance ellipse, meaning that around 95% of the data points are expected to lie inside the ellipse if the data is obtained from a multivariate Gaussian distribution.

The identification of multivariate outliers becomes even more problematic as the number of dimensions increases, because it is no longer possible to represent the data points in a scatterplot. In such a case, we should rely on two- or three-dimensional scatterplots for selected subsets of the variables or for new, carefully constructed variables obtained from dimensionality reduction techniques. Quite conveniently, the Mahalanobis distance may still be used as a tool for identifying multivariate outliers in higher dimensions, even when it is no longer possible to draw tolerance ellipses. For this purpose, it is common to find graphics such as the one below, where the indices of the individuals in the dataset are plotted against their corresponding Mahalanobis distances. The blue dashed horizontal line represents the same level as that marked by the tolerance ellipse above. It is easy to spot the three individuals lying outside the tolerance ellipse by looking at the three points above the blue dashed horizontal line and, in particular, the individual marked in red is again shown to clearly stand out from the other data points.
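Under the Gaussian assumption mentioned above, squared Mahalanobis distances approximately follow a chi-square distribution with as many degrees of freedom as there are variables, so the cut-off marked by a 95% tolerance ellipse can be computed in any dimension. A minimal sketch with invented data (the flagged indices are purely illustrative):

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Hypothetical, correlated height (cm) / weight (kg) sample plus one joint outlier
X = rng.multivariate_normal(mean=[170, 72], cov=[[64, 40], [40, 100]], size=200)
X = np.vstack([X, [150, 95]])   # short but heavy: a multivariate outlier

mu = X.mean(axis=0)
inv_sigma = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
d2 = np.einsum('ij,jk,ik->i', diff, inv_sigma, diff)   # squared Mahalanobis distances

# 95% cut-off: chi-square quantile with 2 degrees of freedom (about 5.99)
cutoff = chi2.ppf(0.95, df=X.shape[1])
print(np.where(d2 > cutoff)[0])   # indices of candidate multivariate outliers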

As a drawback of this method for the identification of multivariate outliers, some authors have pointed out that the Mahalanobis distance is itself strongly influenced by the outliers. For instance, imagine that five additional individuals — also marked in red in the scatterplot below — are added to the dataset. The tolerance ellipse (in red) has now been broadened and contains the individual previously considered as the most outlying. To avoid this problem, we may replace the sample mean vector and the sample covariance matrix in the definition of the Mahalanobis distance with alternatives that are not strongly influenced by the outliers. A popular option is the Minimum Covariance Determinant (MCD) estimator for jointly estimating the mean vector and the covariance matrix, which will identify a tolerance ellipse that is closer to the original ellipse (the blue one) than to the ellipse heavily influenced by the outliers (the red one).
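A hedged sketch of this robust alternative, using scikit-learn's MinCovDet estimator on invented data of the same kind as before:

import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)

# Hypothetical sample with a small cluster of outliers that would inflate
# the classical covariance estimate
X = rng.multivariate_normal(mean=[170, 72], cov=[[64, 40], [40, 100]], size=200)
outliers = rng.multivariate_normal(mean=[150, 95], cov=[[4, 0], [0, 4]], size=6)
X = np.vstack([X, outliers])

# Robust location and scatter via the Minimum Covariance Determinant estimator
mcd = MinCovDet(random_state=0).fit(X)
robust_d2 = mcd.mahalanobis(X)           # squared robust Mahalanobis distances

cutoff = chi2.ppf(0.95, df=X.shape[1])
print(np.where(robust_d2 > cutoff)[0])   # outlying rows flagged by the robust distances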

Another potential drawback for the identification of multivariate outliers is the shape of the dataset, since the Mahalanobis distance only takes into account linear relationships between variables. More specifically, the Mahalanobis distance should not be used when there is clear evidence that there are several clusters of individuals in the data or, more generally, if the shape of the dataset is not somehow elliptical. In this case, we may want to turn to alternatives such as depth-based and density-based outlier detection techniques.

Conclusion

To summarise, in this article we have seen a prominent technique for outlier identification that should be performed as a data preprocessing task. Additionally, data analysts or data scientists may also be interested in reducing the influence of outliers on the resulting model by considering techniques that are less sensitive to the presence of outliers (for such purposes, the reader is directed to classic books on robust statistics). However, the study of outliers should not end there, since it is also important to ultimately analyse the influence of the outliers on the performance of the analytical model. More precisely, one must be careful with the so-called influential points, which are outliers that, when deleted from the dataset, noticeably change the resulting model and its outputs. Further analysis of the reasons why these influential points appear in the dataset must be performed, not only by data professionals but also by experts with deep domain knowledge of the problem at hand.


Jan 20 — 2022

7 trends that will define transformational programs and data initiatives in 2022

You can argue about how much the pandemic had to do with the increasing pace at which Artificial Intelligence (AI) was adopted throughout 2021, but what you cannot argue with is that Covid has pushed leaders to accelerate research and work in this field. Managing uncertainty for the past two years has been a major reason for our clients to keep a data-driven business model as their top strategic priority to stay relevant and competitive, empowering them to actively and effectively respond to rapidly shifting situations. However, all of us are faced with a myriad of technology solutions and tools of increasing technical complexity. To help navigate this sheer amount of information, I have prepared a brief summary of my own perspective on what lies ahead for the next year. When putting this article together, I found it really helpful to summarise the more than 30 conversations I had when recording our Data Stand-up! podcast. I spoke with successful entrepreneurs, CIOs, CDOs and Lead Data Scientists from all around the world, and all of them brought a great share of perspectives on the question: Where is data going in 2022 when it comes to supporting business strategies? So, what does 2022 have in store for us? Let's dive in!

1. Data Lake Houses

Put simply, there have been two “traditional” ways to operationalise data analytics at a business level, in terms of the underlying infrastructure used and the type of data being fed into it:
  • Structured datasets and Data Warehouses: This is about retrieving datasets that display a consistent schema (i.e. data from business applications such as CRMs) and importing them into a Data Warehouse storage solution that then feeds Business Intelligence tools. These “Warehousing architectures” particularly struggle with advanced data use cases. For instance, their inability to store unstructured data for machine learning development is a downside that cannot be overlooked. Furthermore, proprietary Data Warehouse formats do not integrate well with some open-source data science and engineering tools like Spark.
  • Unstructured and semi-structured datasets and Data Lakes: Data Lakes were designed to store unprocessed data or unstructured data files such as pictures, audio or video that cannot fit as neatly into data warehouses. Retrieving raw data and importing it directly into a Data Lake, without any cleansing or pre-processing in between, comes in handy when dealing with these files. The majority of data being generated today is unstructured, so it is now imperative to use tools that enable processing and storing unstructured sets. The Data Lake's drawback is the difficulty of maintaining data quality and governance standards, with lakes sometimes becoming “Data Swamps” full of unprocessed information lacking a consistent schema, which makes it difficult to search, find and extract data at will.
The reality is that both scenarios need to “coexist”: integrating and unifying a Data Warehouse and a Data Lake becomes a requirement, as analytics teams need structured and unstructured data both indexed and stored. Any modern company needs the best of both worlds by building a cost-efficient, resilient enterprise ecosystem that flexibly supports its analytical demands. In other words, any Data Engineer should be able to configure data pipelines and grant retrieval access to Data Scientists, regardless of the underlying infrastructure, so that they can perform their downstream analytics duties. This is the idea and vision behind the “Data Lakehouse”:
A unified architecture that provides the flexibility, cost-efficiency, and ease of use of Data Lakes with the data granularity, consistency, reliability, and durability of data warehouses, enabling subsequent analyses, ML and BI projects.
There are a few providers out there that offer top-notch Data Lakehouse solutions. Databricks seems to be leading the race and is the industry leader, as it was the original creator of the Lakehouse architecture (i.e. Delta Lake). Amazon Web Services (AWS) is another winning horse with a Lakehouse architecture (i.e. Lake Formation + AWS Analytics). Snowflake is also a relevant provider of this emerging “hybrid” infrastructure. I predict that the Data Lakehouse architecture will continue to be in the spotlight in 2022, as companies will focus on Data Engineering even more than previously. There is already a huge demand for data architects and engineers in charge of platforms, pipelines and DevOps.

2. Low-code and No-code AI. Is it really the future?

Data Science is not just a research field anymore, and it has been many years since it was validated as a powerful tool that every area of the business wants a piece of. However, the market continues to struggle to fill new openings, as talent demand still exceeds supply. Low-code or no-code platforms were, and still are, one of the promising solutions to turn this around, as they empower non-technical business professionals to “act” as Data Scientists. Moreover, these tools present an added benefit: more people across the organisation may begin to understand what can be done with data and, therefore, know better what questions can be realistically asked. Some well-known solutions such as DataRobot, H2O AutoML, BigML or ML Studio allow the development of practical data applications with little to no programming experience but…
Is it realistic for people who haven’t learned how to code to implement functional and safe analytical systems or AI solutions? Yes, but only if these non-technical professionals are guided and supported.
These days you may find a marketing executive building an NLP solution for sentiment analysis or a hypermarket operations manager building a demand prediction system, but I must share a word of caution based on recent experience. Codeless does not mean maths-less. Background knowledge of the processes and mathematics behind data transformation, feature engineering and algorithms is needed for the correct ideation and implementation of effective solutions. My take here:
These tools' adoption will continue to grow and low-code solutions will continue to be a relevant trend in 2022. However, the definition of new roles (QA, Coaches, Evangelists, etc.) surrounding the adoption of these tools will be needed too.
Many have quickly realised that the supervision and guidance of qualified data professionals is critical, more so when explainable and transparent AI is an upcoming legal prerequisite.

3. Augmented and hybrid human workforce

Employees have been understandably concerned about robots taking over during the last few years, especially when Gartner claimed that one in three jobs will be taken by software or robots, as some form of AI, by 2025. It seems common sense that organisations should have highlighted earlier that AI is only aimed at augmenting our capabilities, providing us with more time for creative and strategic thinking tasks, and not just replacing people. In my view, Machine Learning will now start to really enhance the lives of employees. Boring and repetitive admin tasks will fade into obscurity and soon be long gone. I believe that 2022 will be the year when we begin to see that AI, in the form of digital co-workers, is really welcomed by people at organisations. Whether you choose to call them robots, RPA systems, or digital co-workers, AI will allow us all to make quicker decisions, automate processes and process vast amounts of information at scale much faster.
In order to remain competitive, businesses of all kinds will have to start designing a hybrid workforce model where humans and “digital co-workers” work hand in hand.
We should still be realistic about expecting automation to fully replace some jobs, but I do hope that reinvented jobs and new positions will balance out all the jobs lost. Cultural adoption barriers still pose a major challenge, but despite popular pessimistic beliefs and potential drawbacks, the redefined augmented workforce is one of the key trends to keep an eye on during 2022 and beyond.

4. Efficiency vs complexity

Whilst a huge chunk of the research efforts and R&D data initiatives by FANGs are directed towards pushing the boundaries of narrow AI in the pursuit of General AI, developing, training and running ever more complex models has inevitably had a negative collateral impact on the environment. Due to the computational power required to fuel some hyper-parameterised models, it is no surprise that data centres are beginning to represent a significant chunk of global CO2 emissions. For reference, back in 2018, the number of parameters in the largest AI models was 94 million, and this grew to 1.6 trillion in 2021 as these larger players pushed the boundaries of complexity. Today, these trillion-parameter models are language and image or vision-based. Models such as GPT-3 can comprehend natural language, but also require a lot of computational power to function. This has motivated leading organisations to explore how they can effectively reduce their Machine Learning carbon footprint.
Big players have started to look at ways of developing efficient models and this has had an impact in the Data Science community as teams seem to now be looking for simpler models that perform as well as complex ones for solving specific problems.
A relatively simple Bayesian model may sometimes perform as well as a 3D-CNN while using significantly less data and computational power. In this context, “model efficiency” will be another key aspect of modern data science.

5. Multi-purpose modelling

It takes a lot of data sets, hard-to-get talent, costly computing resources and valuable time to ideate, develop and train AI models. Data teams are very familiar with the effort that it takes to deploy a model that works properly and accurately, hence Data Scientists understand that every aspect of the development work should be reapplied if possible in other modelling exercises. We have seen this happening in many industries and this trend seems to be pointing in the direction of training capable general-purpose models that are able to handle very diverse data sets and therefore solve thousands of different tasks. This is something that may be incremental over the next few years.
These multimodal models could be thought of and designed from the beginning to be highly efficient reapplicable tools.
These AI models would be combining many ideas that may have been pursued independently in the past. For instance, Google is already following this vision in a next-generation kind of data architecture and umbrella that they have named Pathways. You should not be surprised if you read about substantial progress in the research field of multi-purpose modelling in the next few months.

6. People Analytics

Dissatisfaction with job conditions, reassessments of work-life balance, and lifestyle alterations due to the hardships of the pandemic led to the Great Resignation, an informal name for the widespread phenomenon of hundreds of thousands of workers leaving their jobs during the COVID-19 era. Also called the Big Quit, it has often been mentioned when referring to the US workforce, but this trend is now international. All pandemic effects are still unpredictable but organisations have been forced to wake up and now seem committed to understanding their people. Companies are looking for effective ways to gain this comprehension of their employees. Many have come to the realisation that People Analytics could be the answer. In my view, there are two main drivers that have encouraged leaders to consider People Analytics:
  • The KPIs that define business value have changed during the past years. In the past, it was related to tangible stuff such as warehouse stock, money in the bank, owned real estate, etc., but value nowadays is highly tied to having a talented workforce that can be an industry reference and that nurtures innovation. This relates to the previous trend about workforce changes where creativity will become more and more important hence the need to have a motivated and innovative team that thinks outside the box.
  • Data Technology and AI now form the backbone of the strategic decision-making toolkit at most advanced companies.
People analytics has become a data-driven tool that allows businesses to measure and track their workforce behaviour in relation to their strategy.
People analytics is built upon the collection of individual talent data and its subsequent analysis, allowing companies to comprehend the evolving workplace, but also surfacing insights that drive customer behaviour and engagement. Moreover, it assists the management and HR units to manage and steer the holistic people strategy by prescribing future actions. These actions may be related, but not limited to, improving talent-related decisions, improving workforce processes and promoting positive employee experience. People Analytics was only adopted by large enterprises with big budgets in the past and it has not been until recently that mid-size organisations joined in too. As of 2020, more than 70 percent of organisations were investing in people analytics solutions to integrate resulting insights into their decision-making. I am pretty certain that this percentage will increase significantly during the coming months.

7. Data marketplaces

If data is now understood as the new oil and the most valuable asset for any company, data marketplaces may become a mainstream way to exchange and trade information in 2022. Even though some companies in specific sectors still jealously guard their data, others have spotted an opportunity in exchanging information. Some platforms such as Snowflake's Data Marketplace allow businesses to become data providers, enabling them to easily share and monetise large data sets. Enterprises that generate large or highly unique datasets as part of their day-to-day activities may find it worthwhile to explore this route as a new way of generating additional revenue. In contrast, a few years back, it was common for medium and large businesses to fully outsource data analytics projects to an IT provider that would eventually use the third party's data without consent. Now that everyone has understood that data is the most valuable asset, data will be exchanged and shared at will, but always with the expectation of something in return. Nevertheless, companies that aim to capitalise on this opportunity need to ideate a robust strategy for it by carefully assessing all legal and privacy implications. Similarly, they will have to build processes that automate the required data transformations so that data exports comply with existing regulations.
The rise in AI applications will contribute to the widespread adoption of this trend. Complex models require vast amounts of data to be fed and many will also use these exchanges as a way of developing and training models.
2022 might be the year when The Economist's well-known statement from 2017 about data being the new oil comes closer to business reality, with the first ‘commodity exchanges’.

Conclusions

There are almost certainly more than these 7 trends, but I have chosen to focus on the high-level ones in order to provide a rough prediction of what may shape corporate strategies and business plans around the world. Now to recap the 7 trends that we have discussed and that you could expect to see in the Analytics and the Artificial Intelligence space in 2022:
  • Data Lakehouses as a hybrid architecture that allows efficient processing and analysis of structured, semi-structured and unstructured data sets.
  • Low and no-code data solutions will continue to be a way to democratise Data Science, but new supervisor roles may appear around them.
  • The AI-enhanced workforce will continue to rise, with analytical mechanisms and automation becoming the norm.
  • Model efficiency and simplicity will be a defining metric more than ever.
  • Data Science teams will demonstrate significant interest in multi-purpose AI as a way to efficiently reuse pieces of modelling work from previously tested developments.
  • People Analytics will be one of the most sought-after data initiatives that can realistically support business goals.
  • Data Marketplaces and data exchanges will present a new revenue opportunity for businesses that generate large or unique data sets.
So what data trends are here to stay and what is coming next? Truthfully, no one can tell you, and this is just my opinion, so we will have to play along and see!

Jan 11 — 2021

TRANSFORMERS: multi-purpose AI models in disguise (Part 2)


In the first part of this article, we took a look at the Transformer model and its main use cases related to NLP, which have hopefully broadened your understanding of the topic at hand. If you have not read it yet, I suggest you give it a brief glance first since it will help you understand its current standing.

In the second part of this article, we will present novel model architectures and research employing Transformers in several fields unrelated to NLP, as well as showing some code examples of the capabilities of these remarkable new approaches.


APPLYING THE TRANSFORMER TO OTHER AI TASKS

As previously mentioned, the Transformer architecture provides a suitable framework designed to take advantage of long-term relationships between words. This allows the model to find patterns and meanings in the sentences, and makes it suited for many tasks in NLP. The most common ones are:


  • Text classification into categories, such as obtaining the sentiment of a text
  • Question answering, where the model can extract information from a text when prompted to do so
  • Text generation, such as GPT-3
  • Translation; Google Translate already employs this technology
  • Summarization of a text into a few words or sentences
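As a quick, hedged illustration of how accessible these tasks have become, the Hugging Face Transformers library (discussed in Part 1 of this article) exposes a high-level pipeline helper that loads a pre-trained model for a given task; the example sentence below is invented:

from transformers import pipeline

# Load a default pre-trained sentiment-analysis model (weights are downloaded on first use)
classifier = pipeline("sentiment-analysis")

# Classify an invented example sentence
print(classifier("Transformers have completely changed how we approach NLP tasks!"))
# Expected output is something like: [{'label': 'POSITIVE', 'score': 0.99}]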


After the success of the Transformer model applied to NLP tasks, people began to wonder: if it can find long-term relationships in the data and be trained efficiently, then could it be as efficient in other tasks besides NLP? This is the start of the current movement of research, where this model is used as the backbone for many algorithms in AI and machine learning previously dominated by other techniques. Some amazing contributions in other AI fields are:


Lip reading and text transcription:

A recurring problem in society is related to text transcription, especially for hearing-impaired people. The current advances are divided into two groups: using an audio track and transcribing it, or directly interpreting the words from the person's lip movements. The latter problem is a lot harder to solve since many factors are involved.


Potentially, this solution could provide help in situations where recording audio is impossible, such as in noisy areas. In this regard, most researchers use CNNs or LSTMs as the main model, interpreting this as a pure Computer Vision task. Recently, some studies such as (https://ieeexplore.ieee.org/document/9172849) have been published in which Transformer-based solutions to this problem are presented, providing better results than the current state-of-the-art solutions.

Traffic route prediction at sea:

Figure x: maritime traffic density map as of 2015. (https://www.researchgate.net/figure/2015-worldwide-maritime-traffic-density-map-The-density-is-evaluated-as-the-number-of_fig1_317201419)


One of the main focuses of AI models is the prediction of routes, either for individual people or for traffic. However, the models employed are mostly trained on land traffic, since it is easier to obtain and model that data. Regarding sea traffic, the scarcity of data and its dependence on external factors such as weather or sea currents make it more difficult to provide accurate predictions for the next few hours.

Most of the employed models rely on LSTMs or CNNs for the same reasons as before, but these models struggle when dealing with long-term predictions and they do not take into account the specific characteristics of data obtained at sea. A recent study (https://arxiv.org/abs/2109.03958) presents a novel algorithm that takes into account the data's nuances and provides vessel trajectory predictions using a Transformer model. The accuracy of the predictions is well above that of the alternative models available, especially where long-term predictions are mandatory.


Object detection:

This is a subset of Computer Vision and one of the most common AI tasks. In this task, the model can detect certain objects in an image or video and draw a box around them; some common examples are your phone’s face recognition functionality when you take a picture or unlock it, or CCTV detection of license plates.

In this regard, the models that have been employed in the past are mostly based on CNN since these excel at finding relationships in images; the most common ones being SSD and Faster R-CNN. As a result, most algorithms currently used in these tasks have some variation of this model architecture.

However, as was the case for the other tasks, the Transformer architecture has also been experimented with for finding patterns in images. This has led to several approaches where CNNs and Transformers are used jointly, like Facebook's DETR (https://arxiv.org/abs/2005.12872), or purely Transformer-based architectures like the Vision Transformer (https://arxiv.org/abs/2010.11929). The most impactful research in the past few months has been the novel approach of shifted windows in the Swin Transformer (https://arxiv.org/abs/2103.14030), achieving cutting-edge results on a number of categories involving image analysis.

LEARN BY SEEING: OBJECT DETECTION WITH DETR

For most of these models, the code and training data are publicly available and open-sourced, which eases their use for inference and fine-tuning. As an example, we will show below how to load and use the DETR model on a specific image.

First, install the dependencies (transformers, timm) and load an image of a park using its URL:

Figure x: image of pedestrians in a park.

# Install dependencies
!pip install -q transformers
!pip install -q timm

# Load the needed libraries to load images
from PIL import Image
import requests

# In our case, we selected an image of a park
url = 'https://www.burnaby.ca/sites/default/files/acquiadam/2021-06/Parks-Fraser-Foreshore.jpg'
im = Image.open(requests.get(url, stream=True).raw)

# Show the image
im

Then, we apply the feature extractor to resize and normalize the image so the model can interpret it correctly. This will use the simplest DETR model, with the ResNet-50 backbone:

from transformers import DetrFeatureExtractor

feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")

encoding = feature_extractor(im, return_tensors="pt")

encoding.keys()

Next, load the pre-trained model and pass the image through:

from transformers import DetrForObjectDetection

model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

outputs = model(**encoding)

And that’s it! Now we only have to interpret the results and represent the detected objects with some boxes:

import matplotlib.pyplot as plt

# colors for visualization

COLORS = [[0.000, 0.447, 0.741], [0.850, 0.325, 0.098], [0.929, 0.694, 0.125],
          [0.494, 0.184, 0.556], [0.466, 0.674, 0.188], [0.301, 0.745, 0.933]]

# Define an auxiliary plotting function
def plot_results(pil_img, prob, boxes):
    plt.figure(figsize=(16, 10))
    plt.imshow(pil_img)
    ax = plt.gca()
    colors = COLORS * 100
    for p, (xmin, ymin, xmax, ymax), c in zip(prob, boxes.tolist(), colors):
        ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
                                   fill=False, color=c, linewidth=3))
        cl = p.argmax()
        text = f'{model.config.id2label[cl.item()]}: {p[cl]:0.2f}'
        ax.text(xmin, ymin, text, fontsize=15,
                bbox=dict(facecolor='yellow', alpha=0.5))
    plt.axis('off')
    plt.show()
import torch

# keep only predictions of queries with 0.9+ confidence (excluding no-object class)
probas = outputs.logits.softmax(-1)[0, :, :-1]
keep = probas.max(-1).values > 0.9

# rescale bounding boxes to the original image size
target_sizes = torch.tensor(im.size[::-1]).unsqueeze(0)
postprocessed_outputs = feature_extractor.post_process(outputs, target_sizes)
bboxes_scaled = postprocessed_outputs[0]['boxes'][keep]

# Show the detection results
plot_results(im, probas[keep], bboxes_scaled)

Figure x: detection results on the park image, with bounding boxes and predicted labels.

The accuracy of these models is remarkable! Even smaller objects, which are harder for the usual neural networks to detect, are identified correctly. Thank you @NielsRogge (https://github.com/NielsRogge) for the awesome implementation (https://github.com/NielsRogge/Transformers-Tutorials) of these models in the Transformers library!

These examples are just the tip of the iceberg of this research movement. The high flexibility of this architecture and the numerous advantages it provides are well suited to a number of AI tasks, and advances are being made daily on multiple fronts. Recently, Facebook AI published a new paper presenting the scalability of these models for CV tasks that has stirred the community quite a bit; you can check it out here (https://medium.com/syncedreview/a-leap-forward-in-computer-vision-facebook-ai-says-masked-autoencoders-are-scalable-vision-32c08fadd41f).

Will this be the future of all AI models? Is the Transformer the best solution for all tasks, or will it be relegated to its NLP applications? One thing is for sure: for the time being, the Transformer is here to stay!


Jan 10 — 2021

TRANSFORMERS: multi-purpose AI models in disguise (Part 1)


Novel applications of this powerful architecture set the bar for future AI advances.

If you have dug deep into machine learning algorithms, you will probably have heard of terms such as neural networks or natural language processing (NLP). Regarding the latter, a powerful model architecture has appeared in the last few years that has disrupted the text mining industry: The Transformer. This model has altered the way researchers focus on analysing texts, introducing a novel analysis that has improved the models used previously. In the NLP field, it has become the game-changer mechanism and it is the main focus of research around the world. This has brought the model wide recognition, especially through developments such as OpenAI’s GPT-3 model for the generation of text.

Moreover, it has also been concluded that the architecture of Transformers is highly adaptable, hence applicable to tasks that may seem totally unrelated to each other. These applications could drive the development of new machine learning algorithms that rely on this technology.

The goal of this article is to present the Transformer in this new light, showing common applications and solutions that employ this model, but also remarking on the new and novel uses of this architecture that take into account its many advantages and high versatility.

So, a brief introduction to the Transformer, its beginnings and the most common uses will be presented next. In the second part of this article, we will delve deeper into the new advances being made by the research community, presenting some exciting new use cases and code examples along the way.

It should be noted that AI solutions sometimes lack the responsibility and rigour required when practising Data Science. The undesired effect is that models can retain the inherent bias of the data sets used to train them, and this can lead to fiascos such as Google's Photos app (https://www.bbc.com/news/technology-33347866). I recommend you check out my colleague Jesús Templado's article on responsible AI and some hands-on criteria to follow when ideating, training or fine-tuning these models (https://medium.com/bedrockdbd/part-i-why-is-responsible-ai-a-hot-topic-these-days-da037dbee705).


TRANSFORMER: APPEARANCE & RESEARCH

NLP is one of the cornerstones of Data Science, and it is involved in most of our daily routines: web search engines, online translations or social networks are just some examples where AI algorithms are applied in the understanding of textual data. Until 2017, most research in this field was focused on developing better models based on recurrent and convolutional neural networks. These models were the highest performers in terms of accuracy and explainability at the time, albeit at the cost of enormous processing power and long training times. This meant the focus of the whole research community was on how to make these models perform better, or how to reduce the machine processing costs. However, a bottleneck was quickly being reached in terms of computational power, and novel ways of analysing text were needed more than ever.

In December 2017, the Transformer model architecture was proposed by Google Brain and Google Research members in the paper Attention Is All You Need (https://arxiv.org/abs/1706.03762), providing a new approach to NLP tasks through self-attention technology. This architecture completely outperformed previous models, both in terms of accuracy and training time, and quickly became the state-of-the-art architecture for these applications.

Some questions may come to your mind: How does a Transformer work? How and why is it better? Although we will avoid highly technical explanations, a basic grasp of the fundamentals of each model is needed to understand its many advantages.

Figure x: schema of a neural network. (https://www.w3schools.com/ai/ai_neural_networks.asp)

Neural networks are connections of nodes that represent relationships between data. They consist of input nodes where data is introduced, intermediate layers where it is processed, and output nodes where the results are obtained. Each of these nodes performs an operation on the data (specifically a regression) that affects the final result.

Figure 2: Graphical comparison between a neural network and an RNN. The loop provides the time dimension to the model.

Recurrent neural networks, or RNNs, also take into account the time dimension of the data, where the outcome is influenced by the previous value. This allows the previous state of the data to be kept and fed into the next value. A variation of the RNN named LSTM, or long short-term memory, also takes into account multiple previous points, so the model avoids the short-term memory issues that RNNs usually present.

Figure 3: schematic view of a CNN. Feature learning involves the training process, while classification is the model output.

Convolutional neural networks or CNN apply a mathematical transformation called convolution to the data over a sliding window; this essentially looks at small sections of the data to understand its overall structure, finding patterns or features. The architecture is especially useful for Computer Vision applications, where objects are detected after looking at pieces of each picture.

Recurrence is the main advantage of these models and makes them particularly suited to Computer Vision applications, but it becomes a burden when dealing with text analysis and NLP. The increase in computational power required when dealing with more complex word relationships and context quickly became a limiting factor for the direct application of these models.


The advantage of the Transformer is replacing recurrence with Attention. Attention in this context is a relation mechanism that works “word-to-word”, computing the relationship of each word with the rest, including itself. Since this mechanism operates on all components in parallel rather than sequentially, the computational cost needed is lower than that of recurrence methods.

In the original Transformer architecture, this mechanism is actually a multi-headed attention that runs these operations in parallel, both to speed up the calculations and to learn different interpretations of the same sentence. Although other factors are involved, this is the main reason why the Transformer takes less time to train and produces better results than its counterparts, and the reason why it is the predominant algorithm in NLP.
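To make the idea concrete, here is a minimal, self-contained sketch of scaled dot-product attention, the core operation described in the original paper; the tensors are random stand-ins for real word embeddings:

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Relate every token to every other token and return a weighted mix of the values."""
    d_k = Q.size(-1)
    # Word-to-word similarity scores, scaled by the square root of the embedding dimension
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    weights = F.softmax(scores, dim=-1)   # attention weights for each pair of tokens
    return weights @ V

# Random stand-ins: a "sentence" of 5 tokens with 16-dimensional embeddings
Q = K = V = torch.randn(5, 16)
print(scaled_dot_product_attention(Q, K, V).shape)   # torch.Size([5, 16])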

If you want to learn more about the original Transformer and its most famous variants, I suggest you take a look at Transformers for Natural Language Processing by Denis Rothman; it includes a hands-on explanation and coding lines for each step performed by the model, which helps to understand its inner workings.

Another great thing about the Transformer research community is the willingness to share and spread knowledge. The online community HuggingFace provides a model repository, a Python library and plenty of documentation to use and train new models based on the available frameworks developed by researchers. They also provide a course for those interested in learning about their platform, so this should be the first stop for you, as an interested reader, if you aim to learn more about the current state-of-the-art models!

Using these models is also very easy with the help of their library: in just a few lines of code we can use pre-trained models for different tasks. One example is the use of over 1,000 translation models developed by the University of Helsinki:

# Import the libraries
from transformers import MarianMTModel, MarianTokenizer
import torch

# Load a pretrained "English to Spanish" model
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")

model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-es")

# Input a sentence
input = tokenizer("Transformers are a really cool tool for multiple NLP tasks, but they can do so much more!!", return_tensors='pt', padding=True)

# Print the results
print(tokenizer.batch_decode(model.generate(**input), skip_special_tokens=True)[0])

The output is the sentence: Los transformadores son una herramienta realmente genial para múltiples tareas NLP, pero pueden hacer mucho más!!

Our team at Bedrock has been able to leverage these models to deliver powerful business solutions to People Analytics companies, further reinforcing their utility in the professional environment!

Stay tuned for the next part of this article, where we will present cutting-edge uses of the Transformer in other areas of application of AI, where previously other models reigned supreme.


Aug 19 — 2021

The opportunity to apply responsible AI (Part 1): Guidelines, Data Science tools, legal initiatives, and tips.

Intro

Dramatic increases in computing power have led to a surge of Artificial Intelligence applications with immense potential in industries as diverse as health, logistics, energy, travel and sports. As corporations continue to operationalise Artificial Intelligence (AI), new applications present risks and stakeholders are increasingly concerned about the trust, transparency and fairness of algorithms. The ability to explain the behaviour of each analytical model and its decision-making pattern, while avoiding any potential biases, are now key aspects when it comes to assessing the effectiveness of AI-powered systems. For reference, bias is understood as the prejudice hidden in the dataset used to design, develop and train algorithms, which can eventually result in unfair predictions, inaccurate outcomes, discrimination and other similar consequences. Computer systems cannot validate data on their own, but are empowered to confirm decisions and here lies the beginning of the problem. Traditional scientists understand the importance of context in the validation of curated data sets. However, despite our advances in AI, the one thing we cannot program a computer to do is to understand context and we consistently fail in programming all of the variables that come into play in the situations that we aim to analyse or predict.
“A computer cannot understand context and we consistently fail in programming all of the variables that come into play in the situations that we aim to analyse or predict.”

Historical episodes of failed algorithmia and black boxes

Since the effectiveness of AI is now measured by the creators' ability to explain the algorithm's output and decision-making pattern, “Black boxes” that offer little discernible insight into how outcomes are reached are not acceptable anymore. Some historical episodes that brought us all here have demonstrated how critical it is to look into the inner workings of AI.
  • Sexist Headhunting: We need to go back to 2014 to understand where all this public awareness on Responsible AI began. Back then, a group of Scottish Amazon engineers developed an AI algorithm to improve headhunting, but one year later that team realised that its creation was biased in favour of men. The root cause was that their Machine Learning models were trained to scout candidates by finding terms that were fairly common in the resumés of past successful job applicants, and because of the industry's gender imbalance, the majority of historical hires tended to be male. In this particular case, the algorithm taught itself sexism, wrongly learning that male job seekers were better suited for newly opened positions.
  • Racist facial recognition: Alphabet, widely known for its search engine company Google, is one of the most powerful companies on earth, but it also came under the spotlight in May 2015.

Mr Alcine tweeted Google about the fact its app had misclassified his photo.

The brand was under fire after its Photo App mislabelled a user's picture. Jacky Alcine, a black Web developer, tweeted about the offensive incorrect tag, attaching the picture of himself and a friend who had both been labelled as “gorillas”. This event quickly went viral.


  • Unfair decision-making in Court: In July 2016, the Wisconsin Supreme Court ruled that AI-calculated risk scores can be considered by judges during sentencing. COMPAS, a system built for augmented decision-making, is based on a complex regression model that tries to predict whether or not a perpetrator is likely to reoffend. The model predicted twice as many false positives for reoffending for African American ethnicities as for Caucasian ethnicities, most likely due to the historical data used to train the model. If the model had been well adjusted at the beginning, it could have worked to reduce the unfair incarceration of African Americans rather than increasing it. Also in 2016, an investigation run by ProPublica found that there were some other algorithms used in US courts that tended to incorrectly dispense harsher penalties to black defendants than to white ones based on predictions provided by ML models. These models were used to score the likelihood of these same people committing future felonies. Results from these risk assessments are provided to judges in the form of predictive scores during the criminal sentencing phase, to make decisions about who is set free at each stage of the justice system, when assigning bail amounts or when taking fundamental decisions about imprisonment or freedom.
  • Apple's Credit Card: Launched in August 2019, this product quickly ran into problems as users noticed that it seemed to offer lower credit to women. Even more astonishing was that no one from Apple was able to detail why the algorithm was providing this output. Investigations showed that the algorithm did not even use gender as an input, so how could it be discriminating without knowing which users were women and which were men? It is entirely possible for algorithms to discriminate on gender, even when they are programmed to be “blind” to that variable. A “gender-blinded” algorithm may be biased against women because it may be drawing on data inputs that originally correlated with gender. Moreover, “forcing” blindness to a critical variable such as gender only makes it more difficult to identify and prevent biases on those variables.
  • Most recently, mainly around 2020, AI-enhanced video surveillance has raised some of the same issues that we have just read about such as a lack of transparency, paired with the potential to worsen existing racial disparities. Technology enables society to monitor and “police” people in real time, making predictions about individuals based on their movements, emotions, skin colour, clothing, voice, and other parameters. However, if this technology is not tweaked to perfection, false or inaccurate analytics can lead to people being falsely identified, incorrectly perceived as a threat and therefore hassled, blacklisted, or even sent to jail. This example became particularly relevant during the turmoil caused by the Black Lives Matter riots and the largest tech firms quickly took action: IBM ended all facial recognition programs to focus on racial equity in policing and law enforcement and Amazon suspended active contracts for a year to reassess the usage and accuracy of their biometric technology to better govern the ethical use of their facial recognition systems.

All these are examples of what should never happen. Humans can certainly benefit from AI, but we need to pay attention to all the implications around the advancements of technology.

Transparency vs effective decision-making: The appropriate trade-off

For high-volume, relatively “benign” decision-making applications, such as a TV series recommendation on an Over-The-Top streaming platform, a “black box” model may seem valid. For critical decision-making models that relate to mortgages, work requests or a trial resolution, black boxes are not an acceptable option.

After reading the previous five examples, where AI was ineffectively used to support decisions on who gets a job interview, who is granted parole, and even life-or-death matters, it is clear that there is a growing need to ensure that interpretability, explainability and transparency are addressed thoroughly. This being said, “failed algorithmia” does not imply that humans should not strive to automate or augment their intelligence and decision-making, but that it must be done carefully by following clever and strict development guidelines.

AI was born to augment human intelligence, but we need to ensure that it does not evolve towards automating our biases too. AI systems should be trustworthy and should embody human empowerment, technical robustness, accountability, safety, privacy, governance, transparency, diversity, fairness, non-discrimination and societal and environmental well-being.

“AI was born to augment human intelligence, but we need to ensure that it does not evolve towards automating our biases too.”

This responsibility also applies to C-level leaders and top executives. Global organisations aren't leading by example yet and still show no willingness or need to expose their models' reasoning or to establish boundaries for algorithmic bias. All sorts of mathematical models are still being used by tech companies that aren't transparent enough about how they operate, probably because even those data and AI specialists who know their algorithms are at risk of bias are still keener to achieve their end goal than to remove that bias.

So, what can be done about all this?

There are some data science tools, best practices, and tech tips that we follow and use at Bedrock.

I will be talking about all this in the second part of this article as well as about the need for guidelines and legal boundaries in the Data Science & AI field.


Aug 19 — 2021

The opportunity to apply responsible AI (Part 2): Guidelines, Data Science tools, legal initiatives, and tips.

Intro

In the first part of this article we discussed the potential harm and risks of some Artificial Intelligence applications that have demonstrated immense potential across many industries. We concluded that the ability to explain each algorithm’s behaviour and its decision-making pattern is now key when it comes to assessing the effectiveness of AI-powered systems.

In this second part we will be providing some tips, tools and techniques to tackle this challenge. Likewise, we will be commenting on promising initiatives that are happening in the EU and worldwide around responsible AI. Lastly, we will comment on how responsible AI is an opportunity rather than a burden for organisations.


Technical guidelines and best practices

As professionals that operate in this field and that can be held accountable for what we develop, we should always ask ourselves two key questions:

  1. What does it take for this algorithm to work?
  2. How could this algorithm fail, and for whom?

Moreover, those developing the algorithms should ensure the data used to train the model is bias-free, and not leaking any of their own biases either. Here are a couple of tips to minimise bias:

  • Any datasets used must represent the ideal state and not the current one, as randomly sampled data may carry biases, since the world it is drawn from is not fair. Therefore, we must proactively ensure that the data used represents everyone equally.
  • The evaluation phase should include a thorough “testing stage” across social groups, filtering by gender, age, ethnicity, income, etc., whenever population samples are included in the development of the model or the outcome may affect people.


What tools Data Scientists have

There are tools and techniques that professionals from our field use when they need to explain complex ML models.

  • SHAP (SHapley Additive exPlanations): Its technical definition is based on the Shapley value, which is the average marginal contribution of a feature value over all possible coalitions. In plain English: it works by considering predictions over all possible combinations of inputs and breaking down the final prediction into the contribution of each attribute (a short code sketch follows this list).
  • IBM's AI Explainability 360 (AIX360) and AI Fairness 360: Open-source toolkits that provide one of the most complete stacks to simplify the interpretability of machine learning programs and allow the sharing of the reasoning of models along different dimensions of explanation, together with standard explainability metrics. They were developed by IBM Research to examine, report and mitigate discrimination across the full AI application lifecycle. It is likely that we will see some of the ideas behind these toolkits being incorporated into mainstream deep learning frameworks and platforms.
  • What-If Tool: A platform to visually probe the behaviour of trained machine learning models with minimal coding requirements.
  • DEON: A relatively simple ethics checklist for responsible data science.
  • Model Cards: Proposed by Google Research, Model Cards provide confirmation that the intent of a given model matches its original use case. They can help stakeholders understand the conditions under which an analytical model is safe to implement.
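As an example of the first item above, here is a minimal, hedged sketch of how the shap library can be used with a tree-based model; the dataset and model choice are arbitrary and purely illustrative:

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Train an arbitrary model on a public dataset (purely illustrative)
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Each SHAP value is a feature's contribution to pushing one prediction away
# from the baseline (average) model output
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Visualise which features drive the model's predictions overall
shap.summary_plot(shap_values, X)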


The AI greenfield requires strict boundaries

AI represents a huge opportunity for society and corporations, but the modelling processes should be regulated to ensure that new applications and analytical mechanisms always ease and improve everyone's life. There is no legal framework that helps to tackle this major issue, sets boundaries and/or provides bespoke guidelines. Likewise, there is no international consensus that allows consistent ruling, auditing or review of what is right and wrong in AI; in fact, there is not even national consensus within countries.

Specific frameworks such as Illinois' Biometric Information Privacy Act (BIPA) in the US are a good start. The BIPA has been a necessary pain for tech giants, as it forbids the annotation of biometric data such as facial recognition images, iris scans or fingerprints without explicit consent.

There are ambitious initiatives such as OdiseIA that shed some light on what to do across industries and aim to build a plan to measure the social and ethical impact of AI. But this is not nearly enough, given the immediate need for international institutions to establish global consistency. If a predictive model recommends rejecting a mortgage, can the responsible data science and engineering team detail the logical process and explain to a regulator why it was rejected? Can the leading data scientist prove that the model is reliable within a given acceptable range of fairness? Can they prove that the algorithm is not biased?

The AI development process must be somehow regulated, establishing global best-practices as well as a mandatory legal framework around this science. Regulating the modelling process can mean several things: from hiring an internal compliance team that supports data and AI specialists to outsourcing some sort of audit for every algorithm created or implemented.

AI could be regulated in much the same way that the European Medicines Agency (EMA) in the EU follows specific protocols to ensure the safety and efficacy of drugs and to monitor their adverse effects.

 

Emerging legal initiatives: Europe leading the way

On 8th April 2019 the EU High Level Expert Group on Artificial Intelligence proactively set the Ethics Guidelines for Trustworthy AI that were applicable to model development. They established that AI should always be designed to be:

  1. Lawful: Respecting applicable laws and regulations.
  2. Ethical: Respecting human ethical principles.
  3. Robust: Both from a technical and a social perspective.

The Algorithmic Accountability Act in the USA that dates from November 2019 is another example of a legal initiative that also aimed to set a framework for the development of algorithmic decision-making systems and has also served as a reference to other countries, public institutions and governments.

Fast forward to the present day: on 21st April 2021 the European Commission proposed new rules and actions with the ambition of turning Europe into the global hub for trustworthy AI, combining the first-ever legal framework on AI with a new Coordinated Plan with Member States. The plan aims to guarantee the safety and fundamental rights of people and businesses while strengthening AI uptake, investment and innovation across Europe. The new rules will be applied in the same way across all European countries following a risk-based approach, and a European Artificial Intelligence Board will facilitate implementation and drive the development of AI standards.

 

The opportunity in regulation

Governance in AI, such as that which the EU is driving, should not be considered an evil. If performed properly, AI regulation will level the playing field, will create a sense of certainty, will establish and strengthen trust and will promote competition. Moreover, governance would allow us to legally frame the boundaries of acceptable risks and benefits of AI monetisation while ensuring that any project is planned for success.

“AI regulation will level the playing field, will create a sense of certainty, will establish and strengthen trust and will promote competition.”

Regulation actually opens a new market for consultancies that help other companies and organisations manage and audit algorithmic risks. Cathy O’Neil, a mathematician and the author of Weapons of Math Destruction, a book that highlights the risk of algorithmic bias in dozens of contexts, heads O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), a company set up to help organisations identify and correct potential biases in the algorithms they use.

Counting on a potential international legislator or auditor would also allow those that achieve an “audited player” label to project a positive brand image while remaining competitive. To use an analogy from drug development: modern society relies on medicines prescribed by doctors because there is an inherent trust in their qualifications, and because doctors trust the compulsory clinical trial process that each drug goes through before hitting the market.

 

Final thoughts

Simply put, AI has no future without us humans. Systems collecting the data typically have no way to validate the data they collect and in which context the data is recorded and collected. Data has no intuition, strategic thinking or instincts. Technological advancements are shaping the evolution of our society, but each and every one of us is responsible for paying close attention to how AI, as one of these main advancements, is used for the benefit of the greater good.

If you and your organisation want to be ahead of the game, don’t wait for regulation to come to you, but take proactive steps prior to any imposed regulatory shifts:

  • It must be well understood that data is everything. Scientists strive to ensure the quality of any data set used to validate a hypothesis and go to great lengths to eliminate unknown factors that could alter their experiments. Controlled environments are the essence of well-designed analytical modelling.
  • Design, adapt and improve your processes to establish an internal “auditing” framework: something like a minimum viable checklist that allows your team to work on fair AI while others are still trying to squeeze an extra 1% of accuracy from an ML model. Being exposed to the risk of deploying a biased algorithm that may harm your customers, your scientific reputation and your P&L is not appealing.
  • Design and build repositories to document all newly created governance and regulatory internal processes so that all work is accessible and can be fully disclosed to auditors or regulators when needed, increasing external trust and loyalty to your scientific work.
  • Maintain diverse teams, in terms of backgrounds, demographics and skills, to help avoid unwanted bias. Women and people of colour remain under-represented in the STEM world, yet they may be the first to notice these issues if they are part of the core modelling and development team.
  • Be a promoter and activist for change in the field. Ensure that your communications team and technical leaders take part in AI ethics associations and similar debates. This will allow your organisation to rightly be considered a force for change.

All these are AI strategic mechanisms that we use at Bedrock and that allow the legal and fair utilisation of data. The greatest risk for you and your business not only lies in ignoring the potential of AI, but also in not knowing how to navigate AI with fairness, transparency, interpretability and explainability.

Responsible AI in the form of internal control, governance and regulation should not be perceived as a technical process gateway or as a burden on your board of directors, but as a potential competitive advantage, representing a value-added investment that still is unknown for many. An organisation that successfully acts on its commitment to ethical AI is poised to become a thought leader in this field.

Back to Articles

Jul 23 — 2021

How using adaptive methods can help your network perform better


Intro

An Artificial Neural Network (ANN) is a statistical learning algorithm framed in the context of supervised learning and Artificial Intelligence. It is composed of a group of highly connected nodes, called neurons, arranged in an input layer and an output layer. In addition, there may be several hidden layers between these two, a situation known as deep learning.

Algorithms like ANNs are everywhere in modern life, helping to optimise lots of different processes and make good business decisions. If you want to read a more detailed introduction to Neural Network algorithms, check out our previous article, but if you’re feeling brave enough to get your hands dirty with mathematical details about ways to optimise them, you’re in the right place!

Optimisation techniques: Adaptive methods

When we train an artificial neural network, what we are basically doing is solving an optimisation problem. A well-optimised machine learning algorithm is a powerful tool: it can achieve better accuracy while also saving time and resources. But if we neglect the optimisation process, the consequences can be very negative. For instance, the algorithm might seem perfect during testing and fail resoundingly in the real world, or we could have incorrect underlying assumptions about our data and amplify them when we implement the model. For this reason, it is extremely important to spend time and effort optimising a machine learning algorithm and, especially, a neural network.

The objective function that we want to optimise (in particular, minimise) is in this case the cost or loss function J, which depends on the weights \omega of the network. The value of this function tells us how well our network performs, that is, how well it solves the regression or classification problem we are dealing with. Since a good model makes as few errors as possible, we want the cost function to reach its minimum possible value.

If you have ever read about neural networks, you will be familiar with the classic minimisation algorithm: the gradient descent. In essence, gradient descent is a way to minimise an objective function – J(\omega) in our case – by updating its parameters in the opposite direction of the gradient of the objective function with respect to these parameters.

Unlike in other, simpler optimisation problems, the function J can depend on millions of parameters and its minimisation is not trivial. During the optimisation of a neural network it is common to encounter difficulties such as overfitting or underfitting, choosing the right moment to stop training, getting stuck in local minima or saddle points, or facing a pathological curvature situation. In this article we will explore some techniques to address the last two of these problems.

Neural networks

Remember that gradient descent updates the weights \omega of the network at step t + 1 as follows:

\omega_{t+1} = \omega_t - \alpha \nabla J(\omega_t)

where \alpha is the learning rate and \nabla J(\omega_t) is the gradient of the cost function evaluated at the current weights.
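As a concrete illustration, here is a minimal NumPy sketch of this update rule on a toy quadratic cost; the cost function and its gradient are placeholders chosen purely for illustration.

```python
import numpy as np

def loss(w):
    # Toy quadratic cost with its minimum at w = (1, 2)
    return (w[0] - 1) ** 2 + 10 * (w[1] - 2) ** 2

def grad(w):
    # Analytical gradient of the toy cost
    return np.array([2 * (w[0] - 1), 20 * (w[1] - 2)])

alpha = 0.01           # learning rate
w = np.zeros(2)        # initial weights
for t in range(1000):  # plain (full-batch) gradient descent
    w = w - alpha * grad(w)

print(w, loss(w))      # w approaches (1, 2) and the loss approaches 0
```

In a real network the gradient is not available in closed form and is computed by backpropagation, but the update step itself is exactly the same.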

In order to avoid these problems, we can input some variations in this formula. For instance, we could alter the learning rate \alpha, modify the component relative to the gradient or even modify both terms. There are many different variations that modify the previous equation, trying to adapt it to the specific problem in which they are applied; this is the reason why these are called adaptive methods.

Let’s take a closer look at some of the most commonly used techniques:

  1. Adaptive learning rate

The learning rate \alpha is the network´s hyperparameter that controls how much the model must change, based on the cost function value, each time the weights are updated; it dictates how quickly the model adapts to the problem. As we mentioned earlier, choosing this value is not trivial. If \alpha is too small, the training stage takes longer and the process may not even converge, while if it is too large, the algorithm will oscillate and may diverge.

Although the common approach of taking \alpha = 0.01 provides good results, it has been shown that the training process improves when \alpha stops being constant and starts depending on the iteration t. Below are three options that reformulate \alpha’s expression:

Exponential decay

Inverse decay

Potential decay
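Commonly used formulations of these three schedules, written here in terms of an initial rate \alpha_0 and a decay constant k (the exact expressions may vary slightly between references), are:

\alpha_t = \alpha_0 e^{-kt} (exponential decay)

\alpha_t = \alpha_0 / (1 + kt) (inverse decay)

\alpha_t = \alpha_0 (t + 1)^{-k} (potential decay)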

The constant parameter “k” controls how \alpha_t decreases and it is usually set by trial and error. In order to choose the initial value of \alpha, \alpha_0, there are also known techniques, but they are beyond the scope of this article.

Another simpler approach that is often used to adapt \alpha consists of reducing it by a constant factor every certain number of epochs (training cycles through the full training dataset); for example, dividing it by two every ten epochs. Lastly, the option proposed in [1] is shown below,
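A formulation consistent with the recommendation in [1] and with the description that follows is, approximately:

\alpha_t = \alpha_0 \tau / \max(t, \tau)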

where \alpha is kept constant during the first \tau iterations and then decreases with each iteration t.

 

Adaptive optimisers

  • Momentum

We have seen that when we have a pathological curvature situation, gradient descent has problems in the ravines [Image 2], i.e. in the regions where the curvature of the cost function is much greater along one dimension than along the others. In this scenario, gradient descent oscillates between the ridges of the ravine and progresses more slowly towards the optimum. To avoid this, we could use optimisation methods such as Newton’s well-known method, but this may significantly raise the computational power requirements, since it would have to evaluate the Hessian matrix of the cost function for thousands of parameters.

The momentum technique was developed to dampen these oscillations and accelerate convergence of the training. Instead of only considering the value of the gradient at each step, this technique accumulates information about the gradient in previous steps to determine the direction in which to advance. The algorithm is set as follows:
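A standard formulation consistent with the description in this section (an exponential moving average of the gradients) is:

m_t = \beta m_{t-1} + (1 - \beta) \nabla J(\omega_t)

\omega_{t+1} = \omega_t - \alpha m_t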

where \beta \in [0,1] and m_0 is equal to zero.

If we set \beta = 0 in the previous equation, we see that we recover the plain gradient descent algorithm!

As we perform more iterations, the information from gradients at older stages carries a lower weight: we are computing an exponential moving average of the gradients! This technique is more efficient than a simple moving average, since it adapts more quickly to fluctuations in the most recent data.

 

  • RMSProp

The Root Mean Square Propagation technique, better known as RMSProp, also deals with accelerating convergence to a minimum, but in a different way from Momentum. In this case we do not adapt the gradient term explicitly:
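A common formulation, consistent with the variables introduced below, is:

v_t = \beta v_{t-1} + (1 - \beta) (\nabla J(\omega_t))^2

\omega_{t+1} = \omega_t - \frac{\alpha}{\sqrt{v_t} + \epsilon} \nabla J(\omega_t)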

We have now introduced v_t as the exponential moving average of the squared gradients. As an initial value it is common to take v_0 = 0, with the constant parameters set to \beta = 0.9 and \epsilon = 10^{-7}.

Let’s imagine that we are stuck at a local minimum and the values of the gradient are close to zero. In order to get out of this “minimum zone” we would need to accelerate the oscillations by increasing \alpha. Conversely, if the value of the gradient is large, it means that we are at a point with a lot of curvature, so in order not to overshoot the minimum we want to decrease the step size. By dividing \alpha by that factor we incorporate information about the gradient at previous steps, increasing the effective step when the magnitude of recent gradients is small and decreasing it when it is large.

 

  • ADAM

The Adaptive Moment Estimation algorithm, better known as ADAM, combines the ideas of the two previous optimisers.
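Combining the two updates above, a standard formulation is:

m_t = \beta_1 m_{t-1} + (1 - \beta_1) \nabla J(\omega_t)

v_t = \beta_2 v_{t-1} + (1 - \beta_2) (\nabla J(\omega_t))^2

\omega_{t+1} = \omega_t - \frac{\alpha}{\sqrt{v_t} + \epsilon} m_t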

\beta_1 corresponds to the parameter of the Momentum and \beta_2 to the RMSProp.

We are adding two more hyperparameters to tune on top of \alpha, so some might find this formulation counterproductive, but it is a price worth paying if we aim to accelerate the training process. Generally, the values taken by default are \beta_1 = 0.9, \beta_2 = 0.999 and \epsilon = 10^{-7}.

It has been empirically shown that this optimiser can converge faster to the minimum than other famous techniques like the Stochastic Gradient Descent.

Lastly, it is worth noting that it is common to make a bias correction in ADAM’s equations. This is because in the first iterations we do not yet have much information from previous steps, so the moving averages are biased towards zero; the formula is therefore reformulated with the bias-corrected estimates \hat{m}_t = m_t / (1 - \beta_1^t) and \hat{v}_t = v_t / (1 - \beta_2^t), which replace m_t and v_t in the update.
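Putting the pieces together, here is a minimal, self-contained NumPy sketch of the ADAM update with bias correction, applied to the same kind of toy quadratic cost used earlier; in a real network the gradients would come from backpropagation rather than a hand-written function.

```python
import numpy as np

def grad(w):
    # Gradient of a toy quadratic cost with its minimum at w = (1, 2)
    return np.array([2 * (w[0] - 1), 20 * (w[1] - 2)])

alpha, beta1, beta2, eps = 0.05, 0.9, 0.999, 1e-7
w = np.zeros(2)
m = np.zeros(2)  # moving average of the gradients (Momentum part)
v = np.zeros(2)  # moving average of the squared gradients (RMSProp part)

for t in range(1, 1001):
    g = grad(w)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)  # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)

print(w)  # approaches the minimum at (1, 2)
```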


Conclusions

In summary, the goal of this article is to introduce some of the problems that may arise when we wish to optimise a neural network and the most well-known adaptive techniques to tackle them. We’ve seen that the combination of a dynamic alpha with an adaptive optimiser can help the network learn much faster and perform better. We should remember, however, that Data Science is a field in constant evolution and while you were reading this article, a new paper may have been published trying to prove how a new optimiser can perform a thousand times better than all the ones mentioned here!

In future articles we will look at how to tackle the dreaded problem of an overfitting model and the vanishing gradient. Until then, if you need to optimise a neural network, don’t settle for the default configuration, use these examples to try to adapt it to your specific real problem or business application 🙂

REFERENCES:

[1] Bengio, Y. 2012. Practical Recommendations for Gradient-Based Training of Deep Architectures. arXiv:1206.5533v2.

[2] Intro to optimization in deep learning: Momentum, RMSProp and Adam – Ayoosh Kathuria https://blog.paperspace.com/intro-to-optimization-momentum-rmsprop-adam/

[3] Kingma, Diederik and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.

[4] Zhen Xu, Andrew M. Dai, Jonas Kemp, Luke Metz. 2019. Learning an Adaptive Learning Rate Schedule. arXiv:1909.09712v1.

Back to Articles

Jan 8 — 2021

Trends in Data Science & AI that will shape corporate strategy and business plans


Intro

In response to the most atypical year that many of us have probably ever lived, leading companies have wisely reconsidered where to place their money. Inevitably, the pandemic has noticeably sped up the adoption of Artificial Intelligence (AI) and has also motivated business leaders to accelerate innovation in the pursuit of new routes to generate revenue, with the aim of outrunning the competition.

 

Companies and clients entering a digital transformation and wanting to become data-driven can be overwhelmed by the sheer amount of technological solutions, tools and providers. Keeping up to date with the trends in the discipline may help. So without further ado, let’s take a look at what to watch for in the year ahead with regard to practices, talent, culture, methodologies, and ethics involving business strategy and organisational development.

 

From Data to AI literacy

The structural foundations of a company are its people, so ensuring that your workforce is ready to embrace change, because they understand what change means, is still the first step to take in the AI journey. Data literacy was a trend in late 2019 and throughout 2020, now together with AI literacy both will continue to be a trend for the next two years.

If you think about it, computer literacy is now a commodity that allows us to engage with society (e.g. finding a job, ordering food) in ways previously unimaginable. Similarly, AI literacy is becoming increasingly important as AI systems become more integrated into our daily lives and even into our personal devices. Learning how to interact with AI-based systems that are fuelled with data provides more options in the consumption and use of technology and will pave the way for successful AI adoption in the corporate world.

In order to have AI literacy we must first have data literacy. To understand what an AI algorithm does, you must first know which data you have at hand, understand its meaning, where it is sourced and how it is produced; only then can you know how to extract its value using AI. Therefore, investment in educational workshops, seminars and similar exercises that prepare all business units to understand how Data Science and AI will impact their lives and work is instrumental, especially because of the huge number of preconceived notions floating about out there.

From abstract to actionable: CDOs and human-centered methodologies

AI has been on everyone’s minds as the next major step for humanity, but companies that have previously struggled to find a measurable return on their investment will now take a more pragmatic approach, making adjustments right from the start. These adjustments may involve structuring and reshaping the whole organisation so that AI is welcomed, supported appropriately and then correctly embedded across all business units. One of the advantages of practical AI is that it can achieve ROI in real time, so this could be the year many organisations see their AI efforts begin to really pay off.

So far, statistics have shown that the vast majority of data projects never get completed or just fail to deliver, so the use of human-centered and Design Thinking approaches are relevant when trying to put strategy and ideas into action and to really know where to start and create impact.

Data scientists and engineers that have been working in silos will now be placed transversally across the whole organisation. We will see an empowerment of the Chief Data Officer role and its vertical to support initiatives that affect all organisational layers, ensuring that any Machine Learning or Robotic Process Automation project is aligned with the overall business strategy and creating a quick quantitative and qualitative improvement of existing operations. In turn, data professionals will need to adapt quickly by developing their soft skills, such as communication and business acumen; otherwise there is a good chance the clash between data professionals and business executives will continue, ultimately resulting in AI investments not paying off.

Growing lack of specialised talent

It is now difficult for a company to attract talent in this field: the demand for data professionals and AI specialists vastly exceeds the academic supply, and the majority of companies currently lack the technical talent required to build scalable AI solutions. As a result, salaries are going up, which in turn means that most companies cannot afford to build an internal data science team.

On the other hand, online learning portals have been providing courses and certifications that allow professionals to get up to speed, but those courses alone don’t teach everything that you need for the job; they are only a complement to other forms of training and hands-on projects. Therefore, senior specialists will still be a scarce resource and the current situation will not improve, but get worse.

One of the viable solutions to overcome this hurdle may be to provide access to self-service platforms i.e. automated machine learning tools as a way to optimise all processes that currently require highly specialised roles. This takes us to the next trend.

Self-service Data Science and AI

Taking into account the rising demand for data professionals, organisations that are unable to hire face the risk of being left behind. As a consequence, a growing number of companies are turning to no-code or AutoML platforms that assist throughout the complete data science workflow, from raw dataset preparation to the deployment of the machine learning model. The underlying objective of these “self-service business models” is to harness the commercial opportunity that the growing lack of talent presents.

With the rise of no-code or low-code AI platforms, we could start to wonder: will the job of a data scientist, data engineer or data analyst disappear, or will it just evolve? In my view, although a growing number of tools rightfully promise to make the field of data science more accessible, the average Joe will not be able to make the most of these tools and projects will be very technically limited. There are solutions providing users with attractive interfaces and plenty of prebuilt components that can ease Machine Learning development, but I would dare to say that we are still 4 to 5 years away from a massive democratisation of the Data Science practice (BI took more than 15 years, as a reference). I am, however, pretty convinced that in 2021 we will see many more self-service solutions both offered and implemented.

Automated Data Preprocessing

Inherently related to the previous trend, many Data Scientists have historically agreed that one of the most tedious and complex steps is preparing data sets to be analysed or used in order to develop, train and validate models.

Feature engineering for a Machine Learning algorithm can be more entertaining, and dimensionality reduction in the form of Principal Component Analysis can be challenging, but transforming and cleansing data sets is overwhelming and certainly time consuming. New Python libraries and packages are emerging in which the preprocessing step is automated, saving up to 80% of the time currently spent on the early stages of a project. The trade-off, or even drawback, may be that data scientists are unaware of how the resulting data sets’ features were transformed, and some specific pieces of knowledge could be lost along the way.

Nevertheless, even if the preprocessing of data is automated, some data engineering tasks will still be performed manually such as moving data from different silos to unified data warehouses. This task may be the most time consuming part of the process; and it will be very hard to automate because it is case-specific.

IT (CIOs) vouching hard for AI

In 2021 we expect organisations to start to see the benefits of executing their AI and ML models at a global scale, not only getting them into production for some “local” or specific use cases, but also pushing them horizontally. IT will not and cannot continue to be a “bureaucracy and technical requirements gateway” for AI projects; in 2021 IT will have to keep evolving into an innovation hub and a “visionary instrument” for businesses. CIOs around the globe will push for AI to be embedded across the whole organisation, and the spin-off of the CDO office from the traditional CIO vertical will speak volumes about a company’s digital maturity and the progress of its digital transformation journey.

Explainable and Ethical Data Science and AI

The more central data science and analytics are to the business, and the more data we retrieve and merge, the higher the risk of violating customer privacy. Back in 2018 the big shift was GDPR, followed by browsers taking a stand on data privacy, and now Google is planning to deprecate third-party cookies by 2022. In 2021 the ethics and operational standards behind analytical and predictive models will come into focus so that AI mechanisms are kept clear of bias.

On one hand algorithmic fairness, and on the other the transparency and quality of the data sets used to train and validate these algorithms, are two of the issues in the spotlight, and companies will no longer be able to afford “black boxes”. Leaders will proactively manage data privacy, security and the ethical use of data and analytics. It is not only the right thing to do and an increasing legal requirement, but an essential practice for gaining trust and credibility, both when analytics are used in-house to make decisions and when they are used to outsmart competitors.

Augmented Intelligence going mainstream

The term data-driven has been in the mouths of many, but in reality only a small percentage have put it into practice. The maturity of data technology and the expertise of data professionals will now enable the decision-making process, at any level in the company, to be almost fully automated and data-driven. Note the word “almost”: output from models can complement human thinking, but not completely overrule it, as analytics are not perfect either, e.g. variables could be missing or data may be biased at the source.

Augmented Intelligence, also known as Machine Augmented Intelligence, Intelligence Amplification and Cognitive Augmentation, has referred, since the 1950s, to the effective use of IT for augmenting human intelligence. Corporations will now build up their Augmented Intelligence capabilities, in which human thinking, emotions and subjectivity are combined and strengthened with AI’s ability to process huge amounts of data, allowing these corporations to make informed decisions and to plan and forecast accurately.

Again, this does not mean that the AI algorithm will dictate and tell C-level executives how to run their business, but will certainly provide them with the best guidance available by providing possible outputs with the data that is fed to it. 

Affordable modelling of unstructured datasets

Natural Language Processing, Computer Vision and other forms of unstructured data processing are being improved day by day. In addition to the efforts of hundreds, maybe thousands, of experts refining these AI models, the increase in remote working will drive greater adoption of technologies that embed NLP, Automated Speech Recognition (ASR) and other capabilities of the like. Computing platforms have made it possible, with tools like the Google Natural Language API, for every company to use Deep Learning NLP without needing to train a model locally. This affordable modelling could also soon coin the name AI as a Service. The advancements in this field will allow small and medium businesses to process data in unstructured formats, thanks to the accessibility of validated algorithms paired with affordable cloud processing power.

Data Storytelling and Artistic Data Viz go mainstream hand-in-hand

Any form of advanced analytics, whether it is descriptive, predictive or prescriptive does not make a lasting impact and cannot reach its full potential if insights are not communicated properly. Representing data in appealing visual ways while surrounding the numerical findings with the proper narrative and storytelling elements is now the recipe for success in the data science realm. The algorithm selection and the data set where the model was trained is undoubtedly critical, but presenting the findings and conclusions of “why something happened”, “how it could have happened” and “what could we have done about it” have never been as important as today, due to the complexity of the analytics beneath the surface.

Conclusions

Companies and organisations rely on data to drive their innovation agenda. However, business leaders still face significant challenges to make the most out of their investment in an immature data-driven culture.  Data as an asset, Data Science as a tool and Artificial Intelligence as a discipline will encompass the next revolution for humans and we are lucky to be present.

Recapping, we should expect 10 main trends in Data Science, Data Analytics and the Artificial Intelligence space in 2021:

  1. Data Science and AI literacy will continue to be a trend because humans remain at the centre.
  2. Artificial Intelligence moves from abstract to actionable and the CDO role will gain importance.
  3. The lack of specialised talent will not cease to grow.
  4. Self-service solutions. Many autoML solutions will thrive during the next few years, empowering non-technical users to be rookie data scientists. The self-service option will contribute to widespread adoption, but AI and Data Science consultants will still be critical to drive these initiatives both on a strategic and hands-on level.
  5. Automated Preprocessing of data could soon be feasible, allowing Data Scientists and Data engineers to focus on what really adds value to the business.
  6. IT is pushing for AI harder than ever before.
  7. Explainable, transparent and ethical data management will be at the top of all agendas. The value derived from a predictive analytics project will no longer justify the means used to obtain it.
  8. Augmented Intelligence will allow companies to outrun competition.
  9. Affordable modelling of unstructured datasets will result in a massive adoption of cutting-edge AI solutions.
  10. Data Storytelling and Dataviz will not be the icing on the cake, but the key ingredient in the data science recipe.

The next couple of years will surely show a shift in AI from being an emerging technology to a widespread adoption.

Back to Articles

Dec 11 — 2020

Omitted Variable Bias in Machine Learning models for marketing and how to avoid it

Intro

This isn’t a highly technical article explaining the maths of Omitted Variable Bias (OVB); there are plenty of brave individuals who have already taken that approach, and their work can be read here (1) or here (2). Instead, this is an article discussing what OVB is in plain English and its implications for the world of marketing and data.

Let’s start with the basics: what is OVB? We could define it technically:

When doing regression analysis while omitting variables that affect the relationship between the dependent variable and included explanatory variables, researchers don’t get the true relationship. Therefore, the regression coefficients are hopelessly biased, and all statistics are inaccurate. (2)

(6)

Instead, we’re going to explain it more simply: if you developed a model that makes predictions considering some relevant factors, but not all relevant factors, then the predictions will never be entirely reliable, because you cannot make an accurate prediction if you don’t have access to all the relevant information.
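For readers who prefer to see this in numbers, here is a minimal, hypothetical simulation (the variable names and coefficients are made up purely for illustration) showing how the estimated effect of an included variable gets distorted when a correlated variable is left out of the regression:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# "season" and "sunshine" are correlated, and both drive temperature
season = rng.normal(size=n)                   # the variable we will omit
sunshine = 0.7 * season + rng.normal(size=n)  # the variable we keep
temperature = 2.0 * sunshine + 5.0 * season + rng.normal(size=n)

# Full model: recovers the true effect of sunshine (about 2.0)
full = LinearRegression().fit(np.column_stack([sunshine, season]), temperature)

# Omitting "season": the sunshine coefficient absorbs part of its effect
omitted = LinearRegression().fit(sunshine.reshape(-1, 1), temperature)

print(full.coef_[0])     # close to 2.0
print(omitted.coef_[0])  # noticeably larger than 2.0, i.e. biased
```

The omitted model does not just lose accuracy; it systematically misattributes the effect of the missing variable to the one that remains, which is exactly the trap described in the examples below.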

It’s like trying to predict the temperature just by looking out of the window, with no more information than what you see: sometimes your prediction will be right, as sunny often implies hot, but a significant number of times you will be wrong; for instance, if you predict that it is hot just because it is sunny in the middle of winter, or if you do so during the summer in Utqiagvik, in the north of Alaska (3). So, in this imaginary scenario, there are at least two extra variables that we should be considering: location and season.

Another interesting example that a marketer could relate to:

Let’s imagine that last spring, a bathing suits brand saw that sales were really low and decided to change their media/creative agency. The new media/creative agency starts collaborating with them, and right at the moment their first campaign airs, sales spike. The brand is really impressed with their new agency’s performance, and decides to extend the contract for 3 years.

A few months later, they analyse the data in greater detail, and realise that during spring they had their highest market share in the history of the brand, and that it kept improving during the season. When the new agency started, their share lowered, and it is now back to the levels it was one year ago.

How could this have happened? Because they were omitting the most important variable in their sales: the weather. It had been awful during spring, and right when their new campaign aired summer was starting. There was good weather for the first time that year, so everybody was running out to buy a bathing suit, which they hadn’t done before because they wouldn’t have been able to use it. In their hasty decision, maybe made by people who didn’t even live in the country where this happened, they had completely missed this. They let a media agency go that was actually giving them better results than the new one, with whom they now had a 3 year contract, causing them significant revenue loss.

These were simple imaginary scenarios, which I hope have convinced you that Omitted Variable Bias isn’t just some “mathematical thing”, but a real-world challenge to which companies should pay close attention if they want to make effective decisions. In the earlier examples in this article the missing variable was obvious, but sometimes it’s not so easy. How can we ideate a robust model where all (or as many as possible) relevant variables are taken into account?

(7)

 

By doing a thorough data discovery (4) process in which all stakeholders are on board and all processes are mapped. Before chasing any machine learning application, we must find out what the relevant variables might be by:

  • Looking into our customer’s journey, and analysing their interactions and behaviour through each stage, with the help of the stakeholders who are there along the way: both internal and external (even customers). This is proper Journey Analytics.
  • Considering the 8 Ps of marketing: Product, Place, Price, Promotion, People, Processes, Physical evidence, Productivity & quality (5), and assessing how each of them might be relevant as input for predictive modelling.

Once this is done, we will end up with a comprehensive list of all the critical variables, and we can start designing and building a data warehouse, if there isn’t one already, and then, finally, start building the model. During this process we must not forget about everything we have worked on before. Instead, this is the moment at which, by exploring the data and the model’s results, we can find out whether the model’s outputs are well explained by the chosen variables; if not, we are still missing something. We can then deploy an imperfect model (if it is good enough), prototyping quickly and following up with various iterations to refine it, or go back to earlier stages of ideation.

In a nutshell, for building a model that resembles reality we must first identify the right input, and for that we need to involve every stakeholder in the ideation process. Not doing so could lead to incomplete models that, instead of assisting us in decision making, misguide us. Developing and relying on a more complete and accurate data model will lead us to make more effective and powerful data-driven decisions, that in the end will help us attain our main goals: more customers, and more satisfied customers.

Sources:

(1) https://www.econometrics-with-r.org/6-1-omitted-variable-bias.html

(2) https://www.hindawi.com/journals/ads/2012/728980/

(3) https://en.wikipedia.org/wiki/Utqiagvik,_Alaska

(4) https://bi-survey.com/data-discovery

(5) https://www.professionalacademy.com/blogs-and-advice/marketing-theories—the-marketing-mix—from-4-p-s-to-7-p-s

(6) https://sites.google.com/site/modernprogramevaluation/variance-and-bias

(7) https://www.juancmejia.com/y-bloggers-invitados/mega-guia-de-descubrimiento-de-datos-data-discovery-que-es-beneficios-y-mejores-practicas/

Back to Articles

Dec 1 — 2020

A short introduction to Neural Networks and Deep Learning


Introduction

In this article I attempt to provide an easy-to-digest explanation of what Deep Learning is and how it works, starting with an overview of the enabling technology; Artificial Neural Networks. As this is not an in-depth technical article, please take it as a starting point to get familiar with some basic concepts and terms. I will leave some links along the way, for curious readers to investigate further.

I am working as a Data Engineer at Bedrock, and my interest in the topic arose due to my daily exposure to doses of Machine Learning radiation emitted by the wild bunch of Mathematicians and Engineers sitting around me.

Deep Learning roots

The observation of nature has triggered many important innovations. One with profound socioeconomic consequences arose from the attempt to mimic the human brain. Although we are far from understanding its inner workings, a structure of interconnected specialised cells exchanging electrochemical signals was observed. Some imitation attempts were made until Frank Rosenblatt finally came up with an improved mathematical model of such cells, the Perceptron (1958).

The Perceptron

Today’s Perceptron, at times generalised as the ‘neuron’, ‘node’ or ‘unit’ in the context of Artificial Neural Networks, can be visually described as below:

- the Perceptron -

It operates in the following manner: every input variable is multiplied by its weight, and all of them, together with another special input named ‘bias’, are added together. This result is passed to the ‘activation function’, which finally provides the numerical output response (‘neuron activation’). The weights are a measure of how much an input affects the neuron, and they represent the main ‘knobs’ we have at our disposal to tune the behaviour of the neuron. The Perceptron is the basic building block of Artificial Neural Networks.
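As a minimal sketch of this, a single Perceptron can be written in a few lines of NumPy; the inputs, weights and the choice of a sigmoid activation here are arbitrary examples rather than anything prescribed by the original model.

```python
import numpy as np

def perceptron(x, weights, bias):
    # Weighted sum of the inputs plus the bias...
    z = np.dot(weights, x) + bias
    # ...passed through an activation function (a sigmoid in this example)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])        # input variables
weights = np.array([0.4, 0.1, -0.7])  # how much each input affects the neuron
bias = 0.2

print(perceptron(x, weights, bias))   # the numerical output response
```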

Deep Neural Networks (DNN)

Deep Neural Networks are the combination of inputs and outputs of multiple different Perceptrons on a grand scale, where there may be a large number of inputs, outputs and neurons, with some variations in the topology, like the addition of loops, and optimisation techniques around it, as you can see in the picture below:

- Multi-layer Perceptron or Feedforward neural network -

We can have as many inputs, outputs, and layers in between as needed. These kinds of networks are called ‘feedforward-networks’, due to the direction of data flowing from input to output.

  • The leftmost layer of input values in the picture (in blue) is called ‘input layer’ (with up to millions of inputs).
  • The rightmost layer of output perceptrons (in yellow) is called the ‘output layer’ (there can be thousands of outputs). The green cells represent the output value.
  • The layers of perceptrons in between (in red) are called ‘hidden layers’ (there can be up to hundreds of hidden layers, with thousands of neurons).

The word ‘deep’ refers to this layered structure. Although there is not total agreement on the naming, in general, we can start to talk about Deep Neural Networks, once there are more than 2 hidden layers.

To get an idea of the scale I’m talking about: a 1024×1024 pixel colour image has roughly 3 million input values (1024 × 1024 × 3 channels), so with 1000 nodes in the first layer, 2 outputs and the bias inputs, the network already has over 3 billion parameters.

Choosing the right number of layers and nodes is not a trivial task, as it requires experimentation, testing, and experience. We can’t be sure beforehand which combinations will work best. Common DNNs may have between 6 to 8 hidden layers, with each layer containing thousands of perceptrons. Developing these models is therefore not an easy task, so the cost-benefit trade off needs to be evaluated: a simpler model can sometimes provide results that are almost as good, but with much less development time. Also, teams with the skills to develop Neural Networks are not yet commonplace.

Deep Learning (DL)

Deep Learning is a branch of Artificial Intelligence leveraging the architecture of DNNs to resolve regression or classification problems where the amount of data to process is very large.

Suppose we have a set of images of cats and a set of images of dogs, and we want a computer program that is able to label any of those pictures as either a cat picture or a dog picture with the smallest possible error rate, something called an ‘image classification problem’. As a computer image is basically numerical data we can, after applying some transformations, introduce it as input to our network. We configure our network based on the nature of the problem, by selecting an appropriate number of inputs, outputs, and some number of layers and neurons in between. In our case, we want our network to have two outputs, each associated with a category, one representing dogs, and the other one cats. The actual output value will be a numerical estimation representing how much the network ‘thinks’ that the input picture could be either one category or the other:

The outputs are probability values of the image being a dog or a cat (although in many cases they do not necessarily add up to 1). The initial set of weights is randomly chosen, and therefore, the first response of our network to an input image, will also be random. A ‘loss function’ encoding the output error will be calculated based on the difference between the expected outcome and the actual response of the network. Based on the discrepancy reported by the loss function, the weights will be adjusted to get to a closer approximation.

This is an iterative process. You present a batch of data to the input layer, and then the loss function will compare the actual output against the expected one. A special algorithm (backpropagation) will then evaluate how much each connection influenced the error by ‘traversing’ the network backwards to the input layer, and based on that, it will tweak the weights to reduce the error (towards minimising the loss function). This process goes on by passing more images to the network, from the set of images for which we already know the outcome (the training set). In other words, for every output that produces a wrong prediction/estimation, we should reinforce those connections’ weights for the inputs that would have contributed to the correct prediction.

We will use only a fraction of the labelled dataset (training set) for this process, whilst keeping a smaller fraction (test set) to validate the performance of the network after training. This process is the actual network ‘learning’ phase, as the network is somehow building up ‘knowledge’ from the provided data, and not just memorising data. The larger the amount of quality data we feed in, the better the network will perform over new, unseen data. The key point to grasp here is that the network will become able to generalise, i.e. to classify with high accuracy an image that has never seen before.
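To make this concrete, here is a minimal, hypothetical sketch in Keras of the kind of feedforward classifier described above. The layer sizes, input resolution and optimiser are placeholder choices, and a real cat/dog classifier would normally use a convolutional architecture rather than flattened images.

```python
import tensorflow as tf

# x_train: images flattened to vectors and scaled to [0, 1]; y_train: 0 = cat, 1 = dog
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64 * 64 * 3,)),           # input layer
    tf.keras.layers.Dense(128, activation="relu"),  # hidden layer of units
    tf.keras.layers.Dense(64, activation="relu"),   # another hidden layer
    tf.keras.layers.Dense(2, activation="softmax")  # output layer: one unit per class
])

model.compile(optimizer="adam",                        # gradient-based weight updates
              loss="sparse_categorical_crossentropy",  # the loss function
              metrics=["accuracy"])

# Iteratively adjust the weights on the training set, keeping a test set for validation
# model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
```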

Why now?

Only in recent years has DL become very popular, despite the fact that most of the foundational work has been around for decades. There has been a lot of friction, caused by technological limitations and other challenges, against the widespread use of DL, with some recent breakthroughs being key to the current adoption of the technology. Just to mention a few factors:

  1. Deep Learning algorithms are data hungry. They only perform well when large labelled datasets are available. Businesses have finally started to give all the deserved importance to serious data collection strategies, which is already paying off and will even more so in the near future. Not using these techniques should no longer be an option. As of 2017, 53% of companies were already adopting big data strategies, and in 2019, 95% of businesses needed to manage big data.
  2. Deep Learning computational requirements are extremely demanding. The time taken to properly train a Neural Network was simply impractical in most cases given the available technology. Now we have efficient distributed systems, GPU architectures, and cloud computing at reasonable prices. Therefore, every business can now rely on on-demand computational power without the burden of having to set up their own infrastructure running the risk of quick obsolescence, and thus are able to exploit DL power at lower cost.
  3. Algorithmic challenges. There were important issues to get the optimisation algorithms to work on more than 2 hidden layers. Thanks to breakthrough ‘discoveries’ like backpropagation, convolution, and other techniques, there has been a way to drastically reduce the amount of ‘brute-force’ computational requirements. Also, thanks to available online content and toolsets like ‘tensorflow’, most people can finally experiment and learn in order to create the most diverse applications out of these techniques.

Use Cases

Deep Learning can be used for Regression and Classification tasks, from small to large scale, although for small scale issues other Machine Learning techniques are more suitable. When larger datasets are involved, together with the necessary computational resources, Deep Learning is probably the most powerful Machine Learning technique. I will list here only a few use cases with common applications:

  • Recommender systems: mostly used in e-commerce applications, to predict ratings or user preferences (the Amazon recommendation engine, for example).
  • Speech synthesis/recognition: used in verbal/oral communication with machines, as an alternative or replacement to more traditional types of human/machine interactions (like Apple’s Siri assistant).
  • Text processing: applications can predict textual outputs based on previous inputs, as in search text completions (Google search bar, for example).
  • Image processing/recognition: used where heavy loads of images (including video) need to be processed, as in computer vision, satellite, medical imagery analysis, object detection, autonomous driving.
  • Game playing: systems that can learn from previous games, and compete against humans (DeepMind, AlphaGo).
  • Robotics: advanced control systems for industrial automation, robots with special physical abilities that could replace human workers in hostile environments.

The good thing is, that you can find most of those applications already at work within your phone!

In the case of games, there was a public challenge between a professional Go player and a Team of experts that developed a Deep Learning application, nicknamed AlphaGo. AlphaGo won the challenge, winning 4 games and losing just 1. It was initially trained from existing Go game datasets generated by communities of online gamers, and with the input of some professional players. From a certain point, AlphaGo was set to learn and improve by playing against itself. Expert players declared that AlphaGo came out with beautiful and creative moves, as they witnessed a machine making moves that no professional would have thought of doing until that moment (Go has a quite long tradition). As an analogy for commercial applications, unexpected business insights may be generated using deep learning techniques that no human could have foreseen or guessed through traditional analytical models or from his own experience.

Other impressive results from DL applications have recently been achieved in automated text generation with OpenAI’s new GPT-3 algorithms. This is another special DL network that can work with unlabelled data, as it automatically detects patterns in very large textual datasets. These networks are able to generate text that may often appear as if it were written by humans. Remarkably, the entire English Wikipedia apparently makes up less than 1% of the training data used to train GPT-3! You can see GPT-3 at work here:

  • https://openai.com/blog/openai-api/
  • https://www.theverge.com/21346343/gpt-3-explainer-openai-examples-errors-agi-potential

Considerations

Despite the many practical applications, using these models is still, if not complex, at least very tricky. It is possible to generate a deep learning model with little prior DL knowledge, but in most cases we are likely to obtain misleading results. The handling of models running in any critical or sensitive environment should be left to people with the right technical expertise.

The quality of the data we feed in when training a DNN is of key importance. Many projects involving DL, despite having very sophisticated models, at times cannot go live simply because the real data does not meet the standards the model requires. As the saying goes, “garbage in, garbage out”. To make the most of these analytical methods and architectures, it is critical to implement a strong data culture, establishing robust collection, usability and compliance strategies, and embedding education and training mechanisms at the core of the business.

There are also ethical issues arising from some biased results generated by DL, and the generation of false propaganda/information (deep fakes).

Also, there is still quite some mystery as to the inner workings of DL, which may open the door to issues that are difficult to detect and avoid. In fact, it is possible to manipulate an image in a way that a human would not even perceive, yet a machine might completely misclassify it.

Acquiring a better understanding of the possibilities offered by these machine learning algorithms, and identifying when DL is really an option to be considered, will surely allow us to set the right goals and expectations.

 

Conclusion

We should not consider DL as in any way related to human intelligence; we are still not even close to such complexity. However, we should embrace this branch of Artificial Intelligence as another very powerful extension of our capabilities, rather than as a threat to our jobs. Threats come from misuse… but that’s another story. Possibly the most obvious differentiator between humans and any other known form of life is our ability to build tools, and Deep Neural Networks are among the most promising tools we have at our disposal today.

