Aug 19 — 2021

The opportunity to apply responsible AI (Part 1): Guidelines, Data Science tools, legal initiatives, and tips.

Intro

Dramatic increases in computing power have led to a surge of Artificial Intelligence applications with immense potential in industries as diverse as health, logistics, energy, travel and sports. As corporations continue to operationalise Artificial Intelligence (AI), new applications present risks, and stakeholders are increasingly concerned about the trust, transparency and fairness of algorithms. The ability to explain the behaviour of each analytical model and its decision-making pattern, while avoiding any potential biases, is now a key aspect when it comes to assessing the effectiveness of AI-powered systems. For reference, bias is understood as the prejudice hidden in the dataset used to design, develop and train algorithms, which can eventually result in unfair predictions, inaccurate outcomes, discrimination and other similar consequences. Computer systems cannot validate data on their own, yet they are empowered to confirm decisions, and here lies the beginning of the problem. Traditional scientists understand the importance of context in the validation of curated data sets. However, despite our advances in AI, the one thing we cannot program a computer to do is to understand context and we consistently fail in programming all of the variables that come into play in the situations that we aim to analyse or predict.
“A computer cannot understand context and we consistently fail in programming all of the variables that come into play in the situations that we aim to analyse or predict.”

Historical episodes of failed algorithmia and black boxes

Since the effectiveness of AI is now measured by the creators´ ability to explain the algorithm’s output and decision-making pattern, “Black boxes” that offer little discernible insight into how outcomes are reached are not acceptable anymore. Some historical episodes that brought us all here have demonstrated how critical it is to look into the inner workings of AI.
  • Sexist Headhunting: We need to go back to 2014 to understand where all this public awareness on Responsible AI began. Back then, a group of Scottish Amazon engineers developed an AI algorithm to improve headhunting, but one year later that team realised that its creation was biased in favour of men. The root cause was that their Machine Learning models were trained to scout candidates by finding terms that were fairly common in the resumés of past successful job applicants, and because of the industry´s gender imbalance, the majority of historical hires tended to be male. In this particular case, the algorithm taught itself sexism, wrongly learning that male job seekers were better suited for newly opened positions.
  • Racist facial recognition: Alphabet, widely known for its search engine company Google, is one of the most powerful companies on earth, but it also came into the spotlight in May 2015. The brand came under fire after its Photos app mislabelled a user’s picture: Jacky Alcine, a black web developer, tweeted Google about the offensive incorrect tag, attaching a picture of himself and a friend who had both been labelled as “gorillas”. The event quickly went viral.

  • Unfair decision-making in Court: In July 2016, the Wisconsin Supreme Court ruled that AI-calculated risk scores can be considered by judges during sentencing. COMPAS, a system built for augmented decision-making, is based on a complex regression model that tries to predict whether or not a perpetrator is likely to reoffend. The model predicted roughly twice as many false positives for reoffending for African American defendants as for Caucasian defendants, most likely due to the historical data used to train it. If the model had been well adjusted from the beginning, it could have worked to reduce the unfair incarceration of African Americans rather than increase it. Also in 2016, an investigation run by ProPublica found that other algorithms used in US courts tended to incorrectly dispense harsher penalties to black defendants than to white ones based on predictions provided by ML models that scored the likelihood of these same people committing future felonies. Results from these risk assessments are handed to judges as predictive scores during the criminal sentencing phase and inform decisions about who is set free at each stage of the justice system, from assigning bail amounts to fundamental decisions about imprisonment or freedom.
  • Apple’s Credit Card: Launched in August 2019, this product quickly ran into problems as users noticed that it seemed to offer lower credit limits to women. Even more astonishing was that no one from Apple was able to explain why the algorithm was producing this output. Investigations showed that the algorithm did not even use gender as an input, so how could it discriminate without knowing which users were women and which were men? It is entirely possible for algorithms to discriminate on gender even when they are programmed to be “blind” to that variable: a “gender-blinded” algorithm may still be biased against women because it may be drawing on data inputs that correlate with gender. Moreover, “forcing” blindness to a critical variable such as gender only makes it more difficult to identify and prevent biases related to it.
  • Most recently, mainly around 2020, AI-enhanced video surveillance has raised some of the same issues that we have just read about such as a lack of transparency, paired with the potential to worsen existing racial disparities. Technology enables society to monitor and “police” people in real time, making predictions about individuals based on their movements, emotions, skin colour, clothing, voice, and other parameters. However, if this technology is not tweaked to perfection, false or inaccurate analytics can lead to people being falsely identified, incorrectly perceived as a threat and therefore hassled, blacklisted, or even sent to jail. This example became particularly relevant during the turmoil caused by the Black Lives Matter riots and the largest tech firms quickly took action: IBM ended all facial recognition programs to focus on racial equity in policing and law enforcement and Amazon suspended active contracts for a year to reassess the usage and accuracy of their biometric technology to better govern the ethical use of their facial recognition systems.

All these are examples of what should never happen. Humans can certainly benefit from AI, but we need to pay attention to all the implications around the advancements of technology.

Transparency vs effective decision-making: The appropriate trade-off

For high volume, relatively “benign” decision-making applications, such as a TV series recommendation on an Over-The-Top streaming platform, a “black box” model may seem valid. For critical decision-making models that relate to mortgages, work requests or a trial resolution, black boxes are not an acceptable option.

Having read the previous five examples, where AI was used ineffectively to support decisions on who gets a job interview, who is granted parole and even life-or-death matters, it is clear that there is a growing need to ensure that interpretability, explainability and transparency are addressed thoroughly. This being said, “failed algorithmia” does not imply that humans should not strive to automate or augment their intelligence and decision-making, but that it must be done carefully, following clever and strict development guidelines.

AI was born to augment human intelligence, but we need to ensure that it does not evolve towards automating our biases too. To be deemed trustworthy, AI systems should relate to human empowerment, technical robustness, accountability, safety, privacy, governance, transparency, diversity, fairness, non-discrimination and societal and environmental well-being.

“AI was born to augment human intelligence, but we need to ensure that it does not evolve towards automating our biases too.”

This responsibility also applies to C-level leaders and top executives. Global organisations aren’t leading by example yet, showing little willingness to expose their models’ reasoning or to establish boundaries for algorithmic bias. All sorts of mathematical models are still being used by tech companies that aren’t transparent enough about how they operate, probably because even those data and AI specialists who know their algorithms are at risk of bias remain more focused on achieving their end goal than on rooting that bias out.

So, what can be done about all this?

There are some data science tools, best practices, and tech tips that we follow and use at Bedrock.

I will be talking about all this in the second part of this article as well as about the need for guidelines and legal boundaries in the Data Science & AI field.

Aug 19 — 2021

The opportunity to apply responsible AI (Part 2): Guidelines, Data Science tools, legal initiatives, and tips.

Intro

In the first part of this article we discussed the potential harm and risks of some Artificial Intelligence applications that have demonstrated immense potential across many industries. We concluded that the ability to explain each algorithm’s behaviour and its decision-making pattern is now key when it comes to assessing the effectiveness of AI-powered systems.

In this second part we will be providing some tips, tools and techniques to tackle this challenge. Likewise, we will be commenting on promising initiatives that are happening in the EU and worldwide around responsible AI. Lastly, we will comment on how responsible AI is an opportunity rather than a burden for organisations.

 

Technical guidelines and best practices

As professionals that operate in this field and that can be held accountable for what we develop, we should always ask ourselves two key questions:

  1. What does it take for this algorithm to work?
  2. How could this algorithm fail, and for whom?

Moreover, those developing the algorithms should ensure the data used to train the model is bias-free, and not leaking any of their own biases either. Here are a couple of tips to minimise bias:

  • Any datasets used must represent the ideal state and not the current one: randomly sampled data may carry biases, because the world it is drawn from is itself unfair. Therefore, we must proactively ensure that the data used represents everyone equally.
  • The evaluation phase should include a thorough “testing stage” across social groups, breaking results down by gender, age, ethnicity, income, etc. whenever population samples are included in the development of the model or the outcome may affect people (a minimal sketch of this kind of check is shown after this list).
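
As an illustration of that last point, a per-group evaluation can be as simple as breaking model performance down by a sensitive attribute instead of looking only at global accuracy. This is a toy sketch with made-up column names, not a complete fairness audit:

import pandas as pd

# Hypothetical evaluation results: one row per individual
results = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F"],
    "y_true": [1, 0, 1, 1, 0, 1],
    "y_pred": [1, 0, 1, 1, 1, 0],
})

# Compare accuracy and positive-prediction rate per group
results["correct"] = results["y_true"] == results["y_pred"]
by_group = results.groupby("gender").agg(
    accuracy=("correct", "mean"),
    positive_rate=("y_pred", "mean"),
)
print(by_group)  # large gaps between groups are a red flag worth investigating

In a real project the same breakdown would of course be computed on a proper hold-out set, ideally with the help of dedicated toolkits such as those listed in the next section.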

 

What tools Data Scientists have

There are tools and techniques that professionals from our field use when they need to explain complex ML models.

  • SHAP (SHapley Additive exPlanations): Its technical definition is based on the Shapley value, the average marginal contribution of a feature value over all possible coalitions. In plain English: it breaks the final prediction down into the contribution of each attribute by considering all possible combinations of inputs (see the short sketch after this list).
  • IBM’s AI Explainability 360 (AIX360) and AI Fairness 360 (AIF360): Two open-source toolkits developed by IBM Research. The former provides one of the most complete stacks for simplifying the interpretability of machine learning programs and for sharing the reasoning of models along different dimensions of explanation, together with standard explainability metrics; the latter helps examine, report and mitigate discrimination across the full AI application lifecycle. It is likely that we will see some of the ideas behind these toolkits being incorporated into mainstream deep learning frameworks and platforms.
  • What-If Tool: A platform to visually probe the behaviour of trained machine learning models with minimal coding requirements.
  • DEON: A relatively simple ethics checklist for responsible data science.
  • Model Cards: Proposed by Google Research, Model Cards provide confirmation that the intent of a given model matches its original use case. They can help stakeholders understand the conditions under which an analytical model is safe to implement.
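
As an example of the first tool in this list, a minimal SHAP workflow on tabular data might look like the sketch below. It assumes a tree-based scikit-learn model; the dataset is simply scikit-learn’s built-in diabetes data, used here as a stand-in for your own:

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train any tree-based model on tabular data (a built-in toy dataset as a stand-in)
data = load_diabetes(as_frame=True)
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# One Shapley value per feature per prediction: how much each attribute pushed the output
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features drive the model, and in which direction
shap.summary_plot(shap_values, X)

The same per-prediction values can be plotted for a single customer or case, which is what makes this family of techniques useful when a specific decision has to be justified.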

 

The AI greenfield requires strict boundaries

AI represents a huge opportunity for society and corporations, but the modelling process should be regulated to ensure that new applications and analytical mechanisms always ease and improve everyone’s life. There is no legal framework that helps tackle this major issue, sets boundaries or provides bespoke guidelines. Likewise, there is no international consensus that allows consistent ruling, auditing or review of what is right and wrong in AI; in fact, there is not even national consensus within countries.

Specific frameworks such as Illinois’ Biometric Information Privacy Act (BIPA) in the US are a good start. The BIPA has been a necessary pain for tech giants, as it forbids the collection and use of biometric data like facial recognition images, iris scans or fingerprints without explicit consent.

There are ambitious initiatives such as OdiseIA that shed some light on what to do across industries and aim to build a plan to measure the social and ethical impact of AI. But this is not nearly enough: international institutions urgently need to establish global consistency. If a predictive model recommends rejecting a mortgage, can the responsible data science and engineering team detail the logical process and explain to a regulator why it was rejected? Can the leading data scientist prove that the model is reliable within a given acceptable range of fairness? Can they prove that the algorithm is not biased?

The AI development process must be somehow regulated, establishing global best-practices as well as a mandatory legal framework around this science. Regulating the modelling process can mean several things: from hiring an internal compliance team that supports data and AI specialists to outsourcing some sort of audit for every algorithm created or implemented.

AI could be regulated in the same way that the European Medicines Agency (EMA) in the EU follows specific protocols to ensure the safety and efficacy of drugs and to monitor their adverse effects.

 

Emerging legal initiatives: Europe leading the way

On 8th April 2019 the EU High-Level Expert Group on Artificial Intelligence proactively set out the Ethics Guidelines for Trustworthy AI, applicable to model development. They established that AI should always be designed to be:

  1. Lawful: Respecting applicable laws and regulations.
  2. Ethical: Respecting human ethical principles.
  3. Robust: Both from a technical and sustainable perspective

The Algorithmic Accountability Act in the USA, dating from November 2019, is another example of a legal initiative that aimed to set a framework for the development of algorithmic decision-making systems, and it has served as a reference for other countries, public institutions and governments.

Fast forward to the present day: on 21st April 2021 the European Commission proposed new rules and actions with the ambition of turning Europe into the global hub for trustworthy AI, combining the first-ever legal framework on AI with a new Coordinated Plan with Member States. This new plan aims to guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across Europe. The new rules will be applied in the same way across all European countries following a risk-based approach, and a European Artificial Intelligence Board will facilitate implementation and drive the development of AI standards.

 

The opportunity in regulation

Governance in AI, such as that which the EU is driving, should not be considered as an evil. If performed accurately, AI regulation will level the playing field, will create a sense of certainty, will establish and strengthen trust and will promote competition. Moreover, governance would allow us to legally frame the boundaries on acceptable risks and benefits of AI monetisation while ensuring that any project is planned for success.

“AI regulation will level the playing field, will create a sense of certainty, will establish and strengthen trust and will promote competition.”

Regulation actually opens a new market for consultancies that help other companies and organisations manage and audit algorithmic risks. Cathy O’Neil, a mathematician and the author of Weapons of Math Destruction, a book that highlights the risk of algorithmic bias in dozens of contexts, heads O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), a company set up to help companies identify and correct any potential biases in the algorithms they use.

Counting on a potential international legislator or auditor would also allow those that achieve an “audited player” label to project a positive brand image while remaining competitive. Using an analogy that relates to drug development: modern society relies on medicines prescribed by doctors because there is an inherent trust in their qualifications, and because doctors believe in the compulsory clinical trial processes that each drug goes through before hitting the market.

 

Final thoughts

Simply put, AI has no future without us humans. Systems collecting data typically have no way to validate what they collect, nor the context in which it is recorded. Data has no intuition, strategic thinking or instincts. Technological advancements are shaping the evolution of our society, but each and every one of us is responsible for paying close attention to how AI, as one of these main advancements, is used for the benefit of the greater good.

If you and your organisation want to be ahead of the game, don’t wait for regulation to come to you, but take proactive steps prior to any imposed regulatory shifts:

  • It must be well understood that data is everything. Scientists strive to ensure the quality of any data set used to validate a hypothesis and go to great lengths to eliminate unknown factors that could alter their experiments. Controlled environments are the essence of well-designed analytical modelling.
  • Design, adapt and improve your processes to learn how to establish an internal “auditing” framework: something like a minimum viable checklist that allows your team to work on fair AI while others are still trying to squeeze an extra 1% accuracy from an ML model. Being exposed to the risk of deploying a biased algorithm that may potentially harm your customers, your scientific reputation and your P&L is not appealing.
  • Design and build repositories to document all newly created governance and regulatory internal processes so that all work is accessible and can be fully disclosed to auditors or regulators when needed, increasing external trust and loyalty to your scientific work.
  • Maintaining teams that are diverse in backgrounds, demographics and skills is important for avoiding unwanted bias. While women and people of colour remain under-represented in the STEM world, they may be the first to notice these issues if they are part of the core modelling and development team.
  • Be a promoter and activist for change in the field. Ensure that your communications team and technical leaders take part in AI ethics associations or debates of the like. This will allow your organisation to be rightly considered a force for change.

All these are AI strategic mechanisms that we use at Bedrock and that allow the legal and fair utilisation of data. The greatest risk for you and your business not only lies in ignoring the potential of AI, but also in not knowing how to navigate AI with fairness, transparency, interpretability and explainability.

Responsible AI in the form of internal control, governance and regulation should not be perceived as a technical process gateway or as a burden on your board of directors, but as a potential competitive advantage, representing a value-added investment that still is unknown for many. An organisation that successfully acts on its commitment to ethical AI is poised to become a thought leader in this field.

Jul 23 — 2021

How using adaptive methods can help your network perform better


Intro


An Artificial Neural Network (ANN) is a statistical learning algorithm framed in the context of supervised learning and Artificial Intelligence. It is composed of a group of highly connected nodes, called neurons, that connect an input layer to an output layer. In addition, there may be several hidden layers between the two, a situation known as deep learning.

Algorithms like ANNs are everywhere in modern life, helping to optimise lots of different processes and make good business decisions. If you want to read a more detailed introduction to Neural Network algorithms, check out our previous article, but if you’re feeling brave enough to get your hands dirty with mathematical details about ways to optimise them, you’re in the right place!

Optimisation techniques: Adaptive methods

When we train an artificial neural network, what we are basically doing is solving an optimisation problem. A well optimised machine learning algorithm is a powerful tool: it can achieve better accuracy while also saving time and resources. If we neglect the optimisation process, however, the consequences can be very negative. For instance, the algorithm might seem perfect during testing and fail resoundingly in the real world, or we could have incorrect underlying assumptions about our data and amplify them when we implement the model. For this reason, it is extremely important to spend time and effort optimising a machine learning algorithm and, especially, a neural network.

The objective function that we want to optimise – in particular, minimise – is in this case the cost function or loss function J, which depends on the weights \omega of the network. The value of this function is what informs us of our network’s performance, that is, how well it solves the regression or classification problem that we are dealing with. Since a good model will make as few errors as possible, we want the cost function to reach its minimum possible value.

If you have ever read about neural networks, you will be familiar with the classic minimisation algorithm: the gradient descent. In essence, gradient descent is a way to minimise an objective function – J(\omega) in our case – by updating its parameters in the opposite direction of the gradient of the objective function with respect to these parameters.

Unlike other simpler optimisation problems, the J function can depend on millions of parameters and its minimisation is not trivial. During the optimisation process for our neural network, it is common to encounter difficulties like overfitting or underfitting, choosing the right moment to stop the training process, getting stuck in local minima or saddle points, or facing a pathological curvature situation. In this article we will explore some techniques to solve these last two problems.

Neural networks

Remember that gradient descent updates the weights \omega of the network at step t+1 as follows:

\omega_{t+1} = \omega_t - \alpha \, \nabla J(\omega_t)

In order to avoid these problems, we can introduce some variations into this formula. For instance, we could alter the learning rate \alpha, modify the component relative to the gradient, or even modify both terms. There are many different variations that modify the previous equation, trying to adapt it to the specific problem to which they are applied; this is the reason why these are called adaptive methods.

Let’s take a closer look at some of the most commonly used techniques:

  1. Adaptive learning rate

The learning rate \alpha is the network’s hyperparameter that controls how much the model must change, based on the cost function value, each time the weights are updated; it dictates how quickly the model adapts to the problem. As we mentioned earlier, choosing this value is not trivial. If \alpha is too small, the training stage takes longer and the process may not even converge, while if it is too large, the algorithm will oscillate and may diverge.

Although the common approach of taking \alpha = 0.01 provides good results, it has been shown that the training process improves when \alpha stops being constant and starts depending on the iteration t. Below are three options that redefine \alpha as a function of t:

Exponential decay

Inverse decay

Potential decay
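
These schedules are commonly written as follows (our reconstruction, with \alpha_0 the initial learning rate and k > 0 a decay constant):

\alpha_t = \alpha_0 \, e^{-kt} (exponential decay)

\alpha_t = \frac{\alpha_0}{1 + kt} (inverse decay)

\alpha_t = \alpha_0 \, (1 + t)^{-k} (potential decay)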

The constant parameter “k” controls how \alpha_t decreases and it is usually set by trial and error. In order to choose the initial value of \alpha, \alpha_0, there are also known techniques, but they are beyond the scope of this article.

Another simpler approach that is often used to adapt \alpha consists of reducing it by a constant factor every certain number of epochs – training cycles through the full training dataset –; for example, dividing it by two every ten epochs. Lastly, the option proposed in [1] is shown below,
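
\alpha_t = \alpha_0 \, \frac{\tau}{\max(t, \tau)} (our reconstruction of the schedule, consistent with [1])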

where \alpha is kept constant during the first \tau iterations and then decreases with each iteration t.
  2. Adaptive optimisers
  • Momentum
We have seen that when we have a pathological curvature situation, gradient descent has problems in ravines, the parts of the surface where the curvature of the cost function is much greater along one dimension than along the others. In this scenario, gradient descent oscillates between the ridges of the ravine and progresses more slowly towards the optimum. To avoid this, we could use optimisation methods such as the well-known Newton’s method, but this may significantly raise the computational power requirements, since it would have to evaluate the Hessian matrix of the cost function for thousands of parameters. The momentum technique was developed to dampen these oscillations and accelerate the convergence of training. Instead of only considering the value of the gradient at each step, this technique accumulates information about the gradient in previous steps to determine the direction in which to advance. The algorithm is set as follows:
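
m_t = \beta \, m_{t-1} + (1 - \beta) \, \nabla J(\omega_t)

\omega_{t+1} = \omega_t - \alpha \, m_t

(written here in our own notation, as an exponential moving average of the gradients)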

where \beta \in [0,1] and m_0 is equal to zero.

If we set \beta = 0 in the previous equation, we see that we recover the plain gradient descent algorithm!

As we perform more iterations, the information from gradients at older stages has a lower associated weight; we are taking an exponential moving average of the gradients! This technique is more efficient than a simple moving average since it quickly adapts the prediction to fluctuations in recent data.

 

  • RMSProp

The Root Mean Square Propagation technique, better known as RMSProp, also deals with accelerating convergence to a minimum, but in a different way from Momentum. In this case we do not adapt the gradient term explicitly:
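
v_t = \beta \, v_{t-1} + (1 - \beta) \, \left( \nabla J(\omega_t) \right)^2

\omega_{t+1} = \omega_t - \frac{\alpha}{\sqrt{v_t} + \epsilon} \, \nabla J(\omega_t)

(our notation, with the square of the gradient taken element-wise)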

We have now introduced v_t as the exponential moving average of the square of the gradients. As initial values it is common to take v_0 = 0 and the constant parameters equal to \beta = 0.9 and \epsilon = 10^{-7}.

Let’s imagine that we are stuck at a local minimum and the values of the gradient are close to zero. In order to get out of this “minimum zone” we would need to accelerate the oscillations by increasing \alpha. Reciprocally, if the value of the gradient is large, this means that we are at a point with a lot of curvature, so in order to not exceed the minimum, we then want to decrease the size of the step. By dividing \alpha by that factor we are able to incorporate information about the gradient in previous steps and increase \alpha when the magnitude of the gradients is small.

 

  • ADAM

The Adaptive Moment Estimation algorithm, better known as ADAM, combines the ideas of the two previous optimisers:
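
m_t = \beta_1 \, m_{t-1} + (1 - \beta_1) \, \nabla J(\omega_t)

v_t = \beta_2 \, v_{t-1} + (1 - \beta_2) \, \left( \nabla J(\omega_t) \right)^2

\omega_{t+1} = \omega_t - \frac{\alpha}{\sqrt{v_t} + \epsilon} \, m_t

(our notation, following the conventions of the two techniques above)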

\beta_1 corresponds to the parameter of the Momentum and \beta_2 to the RMSProp.

We are adding two additional hyperparameters to optimise in addition to \alpha, so some might find this formulation counterproductive, but it is a price to be paid if we aim to accelerate the training process. Generally, the values taken by default are \beta_1 = 0.9, \beta_2 = 0.99 and \epsilon = 10^{-7}.

It has been empirically shown that this optimiser can converge faster to the minimum than other famous techniques like the Stochastic Gradient Descent.

Lastly, it is worth noting that it is common to make a bias correction in ADAM’s equations. This is because in the first iterations we do not yet have much accumulated information from previous gradients, so the formulas are reformulated with:
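
\hat{m}_t = \frac{m_t}{1 - \beta_1^{\,t}}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^{\,t}}

where \hat{m}_t and \hat{v}_t replace m_t and v_t in the update rule (this is the correction proposed in [3], written here in our own notation).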

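
For readers who prefer code to formulas, here is a minimal NumPy sketch of a single ADAM step as described above. It is a toy illustration rather than production code, using the default hyperparameter values quoted in this article:

import numpy as np

def adam_step(w, grad, m, v, t, alpha=0.001, beta1=0.9, beta2=0.99, eps=1e-7):
    # Exponential moving averages of the gradient (Momentum) and of its square (RMSProp)
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias corrections for the first iterations
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy usage on J(w) = ||w||^2, whose gradient is 2w and whose minimum is at the origin
w, m, v = np.array([0.5, -0.3]), np.zeros(2), np.zeros(2)
for t in range(1, 1001):
    w, m, v = adam_step(w, 2 * w, m, v, t)
print(w)  # ends up very close to [0, 0]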

Conclusions

In summary, the goal of this article is to introduce some of the problems that may arise when we wish to optimise a neural network and the most well-known adaptive techniques to tackle them. We’ve seen that the combination of a dynamic alpha with an adaptive optimiser can help the network learn much faster and perform better. We should remember, however, that Data Science is a field in constant evolution and while you were reading this article, a new paper may have been published trying to prove how a new optimiser can perform a thousand times better than all the ones mentioned here!

In future articles we will look at how to tackle the dreaded problem of an overfitting model and the vanishing gradient. Until then, if you need to optimise a neural network, don’t settle for the default configuration, use these examples to try to adapt it to your specific real problem or business application 🙂

REFERENCES:

[1] Bengio, Y. 2012. Practical Recommendations for Gradient-Based Training of Deep Architectures. arXiv:1206.5533v2.

[2] Intro to optimization in deep learning: Momentum, RMSProp and Adam – Ayoosh Kathuria https://blog.paperspace.com/intro-to-optimization-momentum-rmsprop-adam/

[3] Kingma, Diederik and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.

[4] Zhen Xu, Andrew M. Dai, Jonas Kemp, Luke Metz. 2019. Learning an Adaptive Learning Rate Schedule. arXiv:1909.09712v1.

Jan 8 — 2021

Trends in Data Science & AI that will shape corporate strategy and business plans


Intro

In response to the most atypical year that many of us have ever lived through, leading companies have wisely reconsidered where to place their money. Inevitably, the pandemic has noticeably sped up the adoption of Artificial Intelligence (AI) and has also motivated business leaders to accelerate innovation in the pursuit of new routes to generate revenue and with the aim of outrunning the competition.

 

Companies and clients entering a digital transformation and wanting to become data-driven can be overwhelmed by the sheer amount of technological solutions, tools and providers. Keeping up to date with the trends in the discipline may help. So without further ado, let’s take a look at what to watch for in the year ahead with regard to practices, talent, culture, methodologies, and ethics involving business strategy and organisational development.

 

From Data to AI literacy

The structural foundations of a company are its people, so ensuring that your workforce is ready to embrace change, because they understand what change means, is still the first step to take in the AI journey. Data literacy was a trend in late 2019 and throughout 2020; together with AI literacy, it will continue to be one for the next two years.

If you think about it, Computer literacy is now a commodity that allows us to engage with society, i.e. finding a job, ordering food, etc, in ways previously unimaginable. Similarly, AI literacy is becoming increasingly important as AI systems become more integrated into our daily lives and even into our personal devices. Learning how to interact with AI-based systems that are fuelled with data provides more options in the consumption and use of technology and will pave the way for successful AI adoption in the corporate world.

In order to have AI literacy we must first have data literacy. To understand what an AI algorithm does, you must first know which data you have at hand, understand its meaning, where it is sourced and how it is produced, and only then can you know how to extract its value using AI. Therefore, investments in educational workshops, seminars and similar exercises that prepare all business units to understand how Data Science and AI will impact their lives and work are instrumental, especially because of the huge amount of preconceived notions that are floating about out there.

From abstract to actionable: CDOs and human-centered methodologies

AI has been on everyone’s minds as the next major step for humanity, but companies that have previously struggled to find a measurable return on their investment will now be taking a more pragmatic approach, making special adjustments right from the start. These adjustments may involve structuring and reshaping the whole organisation so that AI is welcomed, supported appropriately and then correctly embedded across all business units. One of the advantages of practical AI is that it can achieve ROI in real time, so this could be the year many organisations see their AI efforts begin to really pay off.

So far, statistics have shown that the vast majority of data projects never get completed or just fail to deliver, so the use of human-centered and Design Thinking approaches are relevant when trying to put strategy and ideas into action and to really know where to start and create impact.

Data scientists and engineers that have been working in silos will now be placed transversely across the whole organisation. We will see an empowerment of the Chief Data Officer role and its vertical to support initiatives that affect all organisational layers and this will ensure that any Machine Learning or Robotic Process Automation projects are aligned with the overall business strategy, creating a quick quantitative and qualitative improvement of the existing  operations. In turn, data professionals will need to quickly adapt by developing their soft skills, such as communication and business acumen, otherwise there is a good chance the clash between data professionals and business executives will continue, ultimately resulting in AI investments not paying off.

Growing lack of specialised talent

It is now difficult for a company to attract talent in this field: the demand for data professionals and AI specialists vastly exceeds the academic supply, and the majority of companies currently lack the technical talent required to build scalable AI solutions. As a result, salaries are going up, which in turn means that most companies cannot afford to build an internal data science team.

On the other hand, online learning portals have been providing courses and certifications that allow professionals to get up to speed, but those courses alone don’t teach everything that you need for the job; they are only a complement to other forms of training and hands-on projects. Therefore, senior specialists will still be a scarce resource and the current situation will not improve, but get worse.

One of the viable solutions to overcome this hurdle may be to provide access to self-service platforms i.e. automated machine learning tools as a way to optimise all processes that currently require highly specialised roles. This takes us to the next trend.

Self-service Data Science and AI

Taking into account the rising demand for data professionals, organisations that are unable to hire face the risk of being left behind. As a consequence, a growing number of companies are turning to no-code or AutoML platforms that assist throughout the complete data science workflow, from raw dataset preparation to the deployment of the machine learning model. The underlying objective of these “self-service business models” is to harness the commercial opportunity that the growing lack of talent presents.

With the rise of no-code or low-code AI platforms, we could start to wonder: will the job of a data scientist, data engineer or data analyst disappear, or will it just evolve? In my view, although a growing number of tools rightfully promise to make the field of data science more accessible, the average Joe will not be able to make the most out of these tools and projects will be very technically limited. There are solutions providing users with attractive user interfaces and a lot of prebuilt components that can ease Machine Learning developments, but I would dare to say that we are still 4 to 5 years away from a massive democratisation of the Data Science practice (BI took more than 15 years, as a reference). I am, however, pretty convinced that in 2021 we will see many more self-service solutions both offered and implemented.

Automated Data Preprocessing

Inherently related to the previous trend, many Data Scientists have historically agreed that one of the most tedious and complex steps is preparing data sets to be analysed or used in order to develop, train and validate models.

Feature engineering for a Machine Learning algorithm can be entertaining and Dimensionality Reduction in the form of Principal Component Analysis can be challenging, but transforming and cleansing data sets is overwhelming and certainly time consuming. New Python libraries and packages are emerging in which the preprocessing step is automated, saving up to 80% of the time that is currently spent on early project stages. The trade-off, or even drawback, may be that data scientists are left unaware of how the resulting data sets’ features were transformed, and some specific pieces of knowledge could be lost along the way.

Nevertheless, even if the preprocessing of data is automated, some data engineering tasks will still be performed manually such as moving data from different silos to unified data warehouses. This task may be the most time consuming part of the process; and it will be very hard to automate because it is case-specific.

IT (CIOs) vouching hard for AI

In 2021 we would expect organisations to start to see the benefits of executing their AI and ML models at a global scale, not only getting them into production for some “local” or specific use cases, but also pushing them horizontally. IT will not and cannot continue to be a “bureaucracy and technical requirements gateway” for AI projects; in 2021 IT will have to keep evolving to be an innovation hub and a “visionary instrument” for businesses. CIOs around the globe will push for AI to be embedded across the whole organisation, and the spin-off of the CDO office from the traditional CIO vertical will now speak volumes about a company’s digital maturity and about the progress of its digital transformation journey.

Explainable and Ethical Data Science and AI

The more data science and analytics are central to the business, and the more data we retrieve and merge, the higher the risk of violating customer privacy. Back in 2018 the big shift was GDPR, followed by browsers taking a stand on data privacy, and now Google is planning to deprecate third-party cookies by 2022. In 2021 the ethics and operational standards behind analytical and predictive models will come into focus so that any AI mechanisms are free from biases.

On one hand, algorithmic fairness and, on the other, the transparency and quality of the data sets used to train and validate these algorithms are two of the issues in the spotlight, as companies will no longer be able to afford “black boxes”. Leaders will be proactively managing data privacy, security and the ethical use of data and analytics. It is not only the right thing to do and an increasing legal requirement, but an essential practice to gain trust and credibility, both when analytics are used in-house to make decisions and when they are used to outsmart competitors.

Augmented Intelligence going mainstream

The term data-driven has been in the mouths of many, but in reality only a small percentage have put it into practice. The maturity of the data technology and the expertise of data professionals will now enable the decision making process, at any level in the company, to be almost fully automated and data-driven. Note the word “almost”, as output from models can complement human thinking, but not completely overrule it as analytics are not perfect either i.e. variables could be missing or data may be biased from the source.

Augmented Intelligence, also known as Machine Augmented Intelligence, Intelligence Amplification and Cognitive Augmentation, has referred, since 1950, to the effective use of IT mechanisms for augmenting human intelligence. Corporations will now be building up their Augmented Intelligence capabilities where human thinking, emotions and subjectivity are all combined and strengthened with AI´s ability to process huge amounts of data, allowing these Corporations to make informed decisions and to plan and forecast accurately.

Again, this does not mean that the AI algorithm will dictate and tell C-level executives how to run their business, but will certainly provide them with the best guidance available by providing possible outputs with the data that is fed to it. 

Affordable modelling of unstructured datasets

Natural Language Processing, Computer Vision and other forms of unstructured data processing are being improved day by day. In addition to the effort of hundreds and maybe thousands of experts in refining these AI models, the increase in remote working will drive greater adoption of technologies that embed NLP, Automated Speech Recognition (ASR) and other capabilities of the like. Computing platforms have made it possible, with tools like the Google Natural Language API, for every company to use Deep Learning NLP without needing to train the model locally. This affordable modelling could also soon be known as AI as a Service. The advancements in this field will allow small and medium businesses to process data in unstructured formats, thanks to the accessibility of validated algorithms paired with affordable cloud processing power.

Data Storytelling and Artistic Data Viz go mainstream hand-in-hand

Any form of advanced analytics, whether it is descriptive, predictive or prescriptive does not make a lasting impact and cannot reach its full potential if insights are not communicated properly. Representing data in appealing visual ways while surrounding the numerical findings with the proper narrative and storytelling elements is now the recipe for success in the data science realm. The algorithm selection and the data set where the model was trained is undoubtedly critical, but presenting the findings and conclusions of “why something happened”, “how it could have happened” and “what could we have done about it” have never been as important as today, due to the complexity of the analytics beneath the surface.

Conclusions

Companies and organisations rely on data to drive their innovation agenda. However, business leaders still face significant challenges to make the most out of their investment in an immature data-driven culture.  Data as an asset, Data Science as a tool and Artificial Intelligence as a discipline will encompass the next revolution for humans and we are lucky to be present.

Recapping, we should expect 10 main trends in Data Science, Data Analytics and the Artificial Intelligence space in 2021:

  1. Data Science and AI literacy will continue to be a trend because humans remain at the centre.
  2. Artificial Intelligence moves from abstract to actionable and the CDO role will gain importance.
  3. The lack of specialised talent will not cease to grow.
  4. Self-service solutions. Many autoML solutions will thrive during the next few years, empowering non-technical users to be rookie data scientists. The self-service option will contribute to widespread adoption, but AI and Data Science consultants will still be critical to drive these initiatives both on a strategic and hands-on level.
  5. Automated Preprocessing of data could soon be feasible, allowing Data Scientists and Data engineers to focus on what really adds value to the business.
  6. IT is pushing for AI harder than ever before.
  7. Explainable, transparent and ethical data management will be on top of all agendas. The value derived from a predictive analytics project will not justify the means to an end.
  8. Augmented Intelligence will allow companies to outrun competition.
  9. Affordable modelling of unstructured datasets will result in a massive adoption of cutting-edge AI solutions.
  10. Data Storytelling and Dataviz will not be the icing on the cake, but the key ingredient in the data science recipe

The next couple of years will surely show a shift in AI from being an emerging technology to a widespread adoption.

Dec 11 — 2020

Omitted Variable Bias in Machine Learning models for marketing and how to avoid it

Intro

This isn’t a highly technical article explaining the maths of Omitted Variable Bias (OVB), there are plenty of brave individuals who have already taken this approach, and theirs can be read here (1) or here (2). Instead, this is an article discussing what OVB is in plain English and its implications for the world of marketing and data.

Let’s start with the basics: what is OVB? We could define it technically:

When doing regression analysis while omitting variables that affect the relationship between the dependent variable and included explanatory variables, researchers don’t get the true relationship. Therefore, the regression coefficients are hopelessly biased, and all statistics are inaccurate. (2)


Instead, we’re going to explain it more simply: if you developed a model that makes predictions considering some relevant factors, but not all relevant factors, then the predictions will never be entirely reliable, because you cannot make an accurate prediction if you don’t have access to all the relevant information.

It’s like trying to predict the temperature just by looking out of the window, with no more information than what you see: sometimes your prediction will be right, because sunny often does mean hot, but a significant number of times you will be wrong. For instance, if you predict that it’s hot just because it’s sunny in the middle of the winter, or if you do it during the summer in Utqiagvik, in the north of Alaska (3). So, in this imaginary scenario, there would be at least two extra variables that we should be considering: location and season.

Another interesting example that a marketer could relate to:

Let’s imagine that last spring, a bathing suits brand saw that sales were really low and decided to change their media/creative agency. The new media/creative agency starts collaborating with them, and right at the moment their first campaign airs, sales spike. The brand is really impressed with their new agency’s performance, and decides to extend the contract for 3 years.

A few months later, they analyse the data in greater detail, and realise that during spring they had their highest market share in the history of the brand, and that it kept improving during the season. When the new agency started, their share lowered, and it is now back to the levels it was one year ago.

How could this have happened? Because they were omitting the most important variable in their sales: the weather. It had been awful during spring, and right when their new campaign aired summer was starting. There was good weather for the first time that year, so everybody was running out to buy a bathing suit, which they hadn’t done before because they wouldn’t have been able to use it. In their hasty decision, maybe made by people who didn’t even live in the country where this happened, they had completely missed this. They let a media agency go that was actually giving them better results than the new one, with whom they now had a 3 year contract, causing them significant revenue loss.
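
To make the idea concrete, here is a minimal, entirely hypothetical simulation (invented numbers, not the brand’s data) of how omitting a driver such as the weather inflates the apparent effect of a campaign:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1_000
weather = rng.normal(size=n)                   # the omitted variable (good vs bad weather)
campaign = 0.7 * weather + rng.normal(size=n)  # campaign pressure, correlated with the weather
sales = 2.0 * campaign + 3.0 * weather + rng.normal(size=n)

# Full model: both relevant variables included
full = sm.OLS(sales, sm.add_constant(np.column_stack([campaign, weather]))).fit()
# Misspecified model: the weather is omitted
biased = sm.OLS(sales, sm.add_constant(campaign)).fit()

print(full.params[1])    # close to 2.0, the true effect of the campaign
print(biased.params[1])  # noticeably larger: the campaign "absorbs" the weather effect

The exact numbers do not matter; the point is that the coefficient of the included variable silently picks up the effect of the omitted one, which is precisely what misled our imaginary brand.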

These were simple imaginary scenarios, which I hope have convinced you that Omitted Variable Bias isn’t just some “mathematical thing”, but a real-world challenge to which companies should pay close attention if they want to make effective decisions. In the earlier examples in this article the missing variable was obvious, but sometimes it’s not so easy. How can we ideate a robust model where all (or as many as possible) relevant variables are taken into account?


 

By doing a thorough data discovery (4) process in which all stakeholders are on board and all processes are mapped. Before chasing any machine learning application, we must find out what the relevant variables might be by:

  • Looking into our customer’s journey, and analysing their interactions and behaviour through each stage, with the help of the stakeholders who are there along the way: both internal and external (even customers). This is proper Journey Analytics.
  • Considering the 8 Ps of marketing: Product, Place, Price, Promotion, People, Processes, Physical evidence, Productivity & quality (5), and assessing how each of them might be relevant as input for predictive modelling.

Once this is done, we will end up with a comprehensive list of all the critical variables, and we can start designing and building a data warehouse, if there isn’t one already, and then, finally, start building the model. During this process we must not forget about everything we have worked on before. Instead, this is the moment at which, by exploring the data and the model’s results, we can find out whether the model’s outputs are fully explained by the variables; if not, we are still missing something. We can then deploy an imperfect model (if it’s good enough), prototyping quickly and refining it over various iterations, or go back to earlier stages of ideation.

In a nutshell, for building a model that resembles reality we must first identify the right input, and for that we need to involve every stakeholder in the ideation process. Not doing so could lead to incomplete models that, instead of assisting us in decision making, misguide us. Developing and relying on a more complete and accurate data model will lead us to make more effective and powerful data-driven decisions, that in the end will help us attain our main goals: more customers, and more satisfied customers.

Sources:

(1) https://www.econometrics-with-r.org/6-1-omitted-variable-bias.html

(2) https://www.hindawi.com/journals/ads/2012/728980/

(3) https://en.wikipedia.org/wiki/Utqiagvik,_Alaska

(4) https://bi-survey.com/data-discovery

(5) https://www.professionalacademy.com/blogs-and-advice/marketing-theories—the-marketing-mix—from-4-p-s-to-7-p-s

(6) https://sites.google.com/site/modernprogramevaluation/variance-and-bias

(7) https://www.juancmejia.com/y-bloggers-invitados/mega-guia-de-descubrimiento-de-datos-data-discovery-que-es-beneficios-y-mejores-practicas/

Dec 1 — 2020

A short introduction to Neural Networks and Deep Learning


Introduction

In this article I attempt to provide an easy-to-digest explanation of what Deep Learning is and how it works, starting with an overview of the enabling technology; Artificial Neural Networks. As this is not an in-depth technical article, please take it as a starting point to get familiar with some basic concepts and terms. I will leave some links along the way, for curious readers to investigate further.

I am working as a Data Engineer at Bedrock, and my interest in the topic arose due to my daily exposure to doses of Machine Learning radiation emitted by the wild bunch of Mathematicians and Engineers sitting around me.

Deep Learning roots

The observation of nature has triggered many important innovations. One with profound socioeconomic consequences arose in the attempt to mimic the human brain. Although far from understanding its inner workings, a structure of interconnected specialised cells exchanging electrochemical signals was observed. Some imitation attempts were made until finally Frank Rosenblatt came out with an improved mathematical model of such cells, the Perceptron (1958).

The Perceptron

Today’s Perceptron, at times generalised as the ‘neuron’, ‘node’ or ‘unit’ in the context of Artificial Neural Networks, can be visually described as below:

- the Perceptron -

It operates in the following manner: every input variable is multiplied by its weight, and all of them, together with another special input named ‘bias’, are added together. This result is passed to the ‘activation function’, which finally provides the numerical output response (‘neuron activation’). The weights are a measure of how much an input affects the neuron, and they represent the main ‘knobs’ we have at our disposal to tune the behaviour of the neuron. The Perceptron is the basic building block of Artificial Neural Networks.
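
To make the description above concrete, here is a minimal Python sketch of a single perceptron (an illustration of the idea, not any particular library’s implementation):

import numpy as np

def neuron(x, w, b, activation=np.tanh):
    # Weighted sum of the inputs plus the bias, passed through the activation function
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])   # input variables
w = np.array([0.8, 0.1, -0.4])   # weights: how much each input affects the neuron
b = 0.2                          # bias
print(neuron(x, w, b))           # the neuron's numerical output ('activation')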

Deep Neural Networks (DNN)

Deep Neural Networks are the combination of inputs and outputs of multiple different Perceptrons on a grand scale, where there may be a large number of inputs, outputs and neurons, with some variations in the topology, like the addition of loops, and optimisation techniques around it, as you can see in the picture below:

- Multi-layer Perceptron or Feedforward neural network -

We can have as many inputs, outputs, and layers in between as needed. These kinds of networks are called ‘feedforward-networks’, due to the direction of data flowing from input to output.

  • The leftmost layer of input values in the picture (in blue) is called ‘input layer’ (with up to millions of inputs).
  • The rightmost layer of output perceptrons (in yellow) is called the ‘output layer’ (there can be thousands of outputs). The green cells represent the output value.
  • The layers of perceptrons in between (in red) are called ‘hidden layers’ (there can be up to hundreds of hidden layers, with thousands of neurons).

The word ‘deep’ refers to this layered structure. Although there is not total agreement on the naming, in general, we can start to talk about Deep Neural Networks, once there are more than 2 hidden layers.

To get an idea of what I’m talking about, a 1024×1024 pixel colour image (over three million input values, counting its three colour channels), with 1000 nodes in the 1st layer, 2 outputs, and 1 bias input, will have over 3 billion parameters.

Choosing the right number of layers and nodes is not a trivial task, as it requires experimentation, testing, and experience. We can’t be sure beforehand which combinations will work best. Common DNNs may have between 6 to 8 hidden layers, with each layer containing thousands of perceptrons. Developing these models is therefore not an easy task, so the cost-benefit trade off needs to be evaluated: a simpler model can sometimes provide results that are almost as good, but with much less development time. Also, teams with the skills to develop Neural Networks are not yet commonplace.

Deep Learning (DL)

Deep Learning is a branch of Artificial Intelligence leveraging the architecture of DNNs to resolve regression or classification problems where the amount of data to process is very large.

Suppose we have a set of images of cats and a set of images of dogs, and we want a computer program that is able to label any of those pictures as either a cat picture or a dog picture with the smallest possible error rate, something called an ‘image classification problem’. As a computer image is basically numerical data we can, after applying some transformations, introduce it as input to our network. We configure our network based on the nature of the problem, by selecting an appropriate number of inputs, outputs, and some number of layers and neurons in between. In our case, we want our network to have two outputs, each associated with a category, one representing dogs, and the other one cats. The actual output value will be a numerical estimation representing how much the network ‘thinks’ that the input picture could be either one category or the other:

The outputs are probability values of the image being a dog or a cat (although in many cases they do not necessarily add up to 1). The initial set of weights is randomly chosen, and therefore the first response of the network to an input image will also be random. A ‘loss function’ encoding the output error is calculated from the difference between the expected outcome and the actual response of the network. Based on the discrepancy reported by the loss function, the weights are adjusted to get a closer approximation.

This is an iterative process. You present a batch of data to the input layer, and the loss function compares the actual output against the expected one. A special algorithm (backpropagation) then evaluates how much each connection contributed to the error by ‘traversing’ the network backwards to the input layer and, based on that, tweaks the weights to reduce the error (towards minimising the loss function). This process continues by passing more images to the network from the set of images for which we already know the outcome (the training set). In other words, whenever the network produces a wrong prediction, the weights of the connections that would have contributed to the correct prediction are reinforced, while those that led to the error are weakened.
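
Continuing that sketch, Keras wires the loss function and backpropagation together behind `compile` and `fit`; the random arrays below are placeholders standing in for a real labelled training set of cat and dog images.

```python
import numpy as np

# Placeholder data standing in for a real labelled training set (0 = cat, 1 = dog)
train_images = np.random.rand(1000, 128, 128, 3).astype("float32")
train_labels = np.random.randint(0, 2, size=1000)

# The loss function measures the gap between predictions and the known labels;
# backpropagation (run internally by fit) adjusts the weights to shrink that gap.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Iterate over the training set in batches, several times (epochs)
model.fit(train_images, train_labels, batch_size=32, epochs=10)
```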

We will use only a fraction of the labelled dataset (the training set) for this process, whilst keeping a smaller fraction (the test set) to validate the performance of the network after training. This is the actual ‘learning’ phase, as the network is building up ‘knowledge’ from the provided data rather than simply memorising it. The larger the amount of quality data we feed in, the better the network will perform on new, unseen data. The key point to grasp here is that the network becomes able to generalise, i.e. to classify with high accuracy an image it has never seen before.
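
Under the same assumptions, the held-out test set is only touched after training, to check how well the network generalises to images it has never seen:

```python
import numpy as np

# Placeholder held-out data; in practice this is a separate slice of labelled images
test_images = np.random.rand(200, 128, 128, 3).astype("float32")
test_labels = np.random.randint(0, 2, size=200)

# Evaluate generalisation on data the network never saw during training
test_loss, test_accuracy = model.evaluate(test_images, test_labels)
print(f"Accuracy on unseen images: {test_accuracy:.2%}")
```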

Why now?

Only in recent years has DL become very popular, despite the fact that most of the foundational work has been around for decades. Technological limitations and other challenges long held back the widespread use of DL, and a handful of recent breakthroughs have been key to the current adoption of the technology. Just to mention a few factors:

  1. Deep Learning algorithms are data hungry. They only perform well when large labelled datasets are available. Businesses have finally started to give serious data collection strategies the importance they deserve, which is already paying off and will do so even more in the near future. Ignoring these techniques should no longer be an option. As of 2017, 53% of companies were already adopting big data strategies, and in 2019, 95% of businesses needed to manage big data.
  2. Deep Learning computational requirements are extremely demanding. The time needed to properly train a Neural Network was simply impractical in most cases given the previously available technology. Now we have efficient distributed systems, GPU architectures and cloud computing at reasonable prices. Every business can therefore rely on on-demand computational power, without the burden of setting up its own infrastructure and the risk of quick obsolescence, and is thus able to exploit the power of DL at a lower cost.
  3. Algorithmic challenges. There were significant obstacles to getting the optimisation algorithms to work with more than two hidden layers. Breakthroughs such as backpropagation, convolution and other techniques drastically reduced the ‘brute-force’ computational requirements. Also, thanks to freely available online content and toolsets like TensorFlow, most people can finally experiment, learn and build the most diverse applications out of these techniques.

Use Cases

Deep Learning can be used for Regression and Classification tasks, from small to large scale, although for small scale issues other Machine Learning techniques are more suitable. When larger datasets are involved, together with the necessary computational resources, Deep Learning is probably the most powerful Machine Learning technique. I will list here only a few use cases with common applications:

  • Recommender systems: mostly used in e-commerce applications, to predict ratings or user preferences (the Amazon recommendation engine, for example).
  • Speech synthesis/recognition: used in verbal/oral communication with machines, as an alternative or replacement to more traditional types of human/machine interactions (like Apple’s Siri assistant).
  • Text processing: applications can predict textual outputs based on previous inputs, as in search text completions (Google search bar, for example).
  • Image processing/recognition: used where heavy loads of images (including video) need to be processed, as in computer vision, satellite, medical imagery analysis, object detection, autonomous driving.
  • Game playing: systems that can learn from previous games, and compete against humans (DeepMind, AlphaGo).
  • Robotics: advanced control systems for industrial automation, robots with special physical abilities that could replace human workers in hostile environments.

The good thing is that you can find most of these applications already at work in your phone!

In the case of games, there was a public challenge between a professional Go player and a team of experts that developed a Deep Learning application nicknamed AlphaGo. AlphaGo won the challenge, winning 4 games and losing just 1. It was initially trained on existing Go game datasets generated by communities of online players, with the input of some professional players. From a certain point, AlphaGo was set to learn and improve by playing against itself. Expert players declared that AlphaGo came up with beautiful and creative moves, as they witnessed a machine making moves that no professional would have thought of until that moment (and Go has a very long tradition). As an analogy for commercial applications, deep learning techniques may generate unexpected business insights that no human could have foreseen or guessed through traditional analytical models or their own experience.

Other impressive results from DL applications have recently been achieved in automated text generation with OpenAI’s GPT-3. This is another special kind of DL network that can work with unlabelled data, automatically detecting patterns in very large textual datasets. These networks are able to generate text that may often appear as if it were written by humans. Remarkably, the entire English Wikipedia apparently makes up less than 1% of the training data used to train GPT-3! You can see GPT-3 at work here:

Considerations

Despite the many practical applications, using these techniques remains tricky, if not outright complex. It is possible to build a deep learning model with little prior DL knowledge, but in most cases we are likely to obtain misleading results. The handling of models running in any critical or sensitive environment should be left to people with the right technical expertise.

The quality of the data we feed in when training a DNN is of key importance. Many DL projects, despite having very sophisticated models, cannot go live simply because the real data does not meet the standards the model requires. As the saying goes: ‘garbage in, garbage out’. To make the most of these analytical methods and architectures, it is critical to implement a strong data culture, establishing robust collection, usability and compliance strategies, and embedding education and training mechanisms at the core of the business.

There are also ethical issues arising from biased results generated by DL, and from the generation of false propaganda and misinformation (deepfakes).

Also, there is still quite some mystery around the inner workings of DL, which may open the door to issues that are difficult to detect and avoid. In fact, it is possible to manipulate an image in a way that a human cannot perceive, yet a machine might misclassify it completely.
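
As an illustration of this last point, a tiny perturbation built from the model’s own gradients (in the spirit of the well-known ‘fast gradient sign’ idea) can be imperceptible to a human yet change the prediction; the stand-in model, the random image and the epsilon value below are purely illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# A stand-in model (in practice, the trained network from the earlier sketch)
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(128, 128, 3)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def adversarial_image(image, true_label, epsilon=0.007):
    """Nudge each pixel slightly in the direction that increases the model's error."""
    image = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    label = tf.convert_to_tensor([true_label])
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = loss_fn(label, model(image))
    gradient = tape.gradient(loss, image)
    # A change too small for a human to notice, yet it can flip the prediction
    return tf.clip_by_value(image + epsilon * tf.sign(gradient), 0.0, 1.0)

perturbed = adversarial_image(np.random.rand(128, 128, 3), true_label=0)
```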

Acquiring a better understanding of the possibilities offered by these machine learning algorithms, and identifying when DL is really an option worth considering, will surely allow us to set the right goals and expectations.

 

Conclusion

We should not consider DL as being in any way related to human intelligence. We are still nowhere near such complexity. The way to embrace this branch of Artificial Intelligence is as another very powerful extension of our capabilities, rather than as a threat to our jobs. Threats come from misuse…but that’s another story. Possibly the most obvious differentiator between humans and any other known form of life is our ability to build tools, and Deep Neural Networks are among the most promising tools we have at our disposal today.



Nov 13 — 2020

The data in esports


Intro

Interest in esports is growing, and this widespread phenomenon attracts millions of individuals around the globe every day. What started out as a niche activity has now turned into shows that bring thousands of people together in stadiums and pavilions. All this may be due to the exponential growth of the video game industry, the increasing relevance of online platforms such as Twitch and YouTube, or simply the fact that newer generations are more digitally oriented and more open to consuming this kind of content.

What exactly are esports?

It is a question that many people still do not know how to answer.

To be considered an esport and not just a video game: players must face and compete with each other on equal terms; the winner must be determined based on demonstrated ability through an established scoring system; and there must be regulated leagues or competitions made up of professional teams and players. The game must also attract a large number of players and be broadcast on online platforms or through some other means of communication or media.


The discipline is on a long upward trajectory, and as the popularity of competitive gaming increases, the opportunity around the esports industry has never been greater. According to Newzoo forecasts, global revenues in the esports market were expected to exceed $1.1 billion in 2020. Although COVID-19 ultimately stalled that growth, with revenues remaining at roughly $950.3 million, the forecasts for the next few years still look promising.

Use of data in the esports sector


There are multiple online platforms, both subscription-based and free, that allow players to know details of their favourite games which were traditionally quite difficult to gather and visualise. They enable the detection of poor in-game decisions, generate comparisons with other users, analyse and correct errors, show players’ performance trends, and even reveal the in-game configuration chosen by professional players.

Blitz, Mobalytics, op.gg, hltv.org, shadow.gg are some examples of platforms providing these sorts of functionalities.

These tools work in a similar way to the typical digital dashboards you can find in the day-to-day operation of any leading company performing basic business intelligence. The concept here is the same: data is obtained from one or more sources (in this case, the video game servers) and, through a series of data processing and model development mechanisms, visualisations are generated to surface insights for better decision-making in the next games.
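
As a simplified, hypothetical illustration of that pipeline in Python (the match data, column names and the KDA metric are invented for the example):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-match statistics (in reality pulled from the game servers or an API)
matches = pd.DataFrame({
    "match_date": pd.date_range("2020-10-01", periods=5, freq="D"),
    "kills":   [4, 7, 2, 9, 6],
    "deaths":  [5, 3, 6, 2, 4],
    "assists": [8, 10, 4, 12, 9],
})

# Derive a simple performance metric and visualise its trend over time
matches["kda"] = (matches["kills"] + matches["assists"]) / matches["deaths"]
matches.plot(x="match_date", y="kda", title="KDA per match")
plt.show()
```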


The result is that players develop faster and reach greater levels of skill, contributing to an even stronger competitive scene and more hype around these competitions. The better the players, the greater the expectation for each game, which generates more engaged and loyal audiences and, in turn, more interest from brands wanting to be present in this sector.

Data visualisation has become key to enhancing the viewer experience during live broadcasts, as it allows video game “noobs” (inexperienced participants) to follow the spectacle and understand competitive broadcasts more easily. Data is shown on screen in different tables and graphics during the broadcast, and it is possible to observe in real time how the most relevant metrics fluctuate during competitive matches. This generates greater empathy from the first-time viewer, facilitating the following of the game while helping to retain the most casual, and fickle, audience.

Most of these games are broadcast on online platforms such as Twitch, which for some time has been using an AI and ML-based system to manage and moderate the content shown in the chat in real time. This technology, called AutoMod, allows more efficient filtering of inappropriate or offensive content, enabling automatic moderation of chat activity. While similar tools have been around for a long time, embedding ML now allows these platforms to improve their accuracy over time without constant human intervention.

Other applications commonly used within this environment that rely on data and the development of various ML algorithms include:

 

  • The bot system: bots are simple forms of AI that simulate the movements and actions of a human player. These virtual players are used in competitive video games with two main objectives: they allow new players to get up to speed with the mechanics of the game before facing more experienced opponents, reducing the steepness of their learning curve, and they are useful as a warm-up or practice tool for more experienced players, even pros. There are hundreds of bots, but the more advanced ones have been developed using Reinforcement Learning, a form of Machine Learning.
  • The AFK system: the AFK (away from keyboard) player detection system is focused on measuring and analysing a player’s absence from the game. If the system does not pick up any type of input signal for a while, it will assume that the player is not active. Depending on the video game, the action taken by the system will differ, but it will most likely result in the player being thrown out of the game. Various programming mechanisms, including ML models, are used to assess whether a player is AFK (a minimal sketch of this idea follows after the list).
  • The reporting system: a tool that penalises players based on the historical in-game data collected. If a player receives a high number of reports from others, or the system records several abandoned games, they will be automatically banned and prevented from participating for a while. ML models are applied to increase this “punishment time” if the player persists with their bad behaviour.
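
Here is the minimal sketch of the AFK idea referred to above; the threshold and the notion of a single ‘last input’ timestamp are assumptions, whereas real systems combine several signals and, in some games, ML models.

```python
import time

AFK_THRESHOLD_SECONDS = 120   # assumed limit without any input before flagging a player

def is_afk(last_input_timestamp, now=None):
    """Flag a player as away-from-keyboard if no input has arrived recently."""
    now = now or time.time()
    return (now - last_input_timestamp) > AFK_THRESHOLD_SECONDS

# Example: a player whose last click or keypress was 5 minutes ago
print(is_afk(time.time() - 300))   # -> True, the player would be flagged
```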

 

In addition to all these functionalities, there are dozens of studies proposing new ML/AI-based tools with which to keep improving different aspects of the game and its surrounding environment.

 

Commercialising / Monetising data in the esports sector

Monetising audiences’ data is another emerging opportunity.

All this growth has forced brands to pay real attention. It has opened the door to a very volatile and complex audience and allowed firms to stand out with differentiated messages that aim to make an impact on these audiences.

One of the most productive ways for brands to obtain marketing benefits from esports sponsorships is by acquiring a comprehensive understanding of their audience through data. Data from broadcasting, advertisements and other digital marketing initiatives allows brands to measure the performance of their campaigns among young people. These data sets also allow them to create precise retargeting campaigns for brand awareness and target-audience segmentation. Brands can gain a full picture of the consumer funnel and identify more accurately where to generate a greater audience impact and greater brand benefits. These data will also be useful for developing loyalty campaigns with which to attract the attention of future consumers.


A clear example is found in BMW, a globally recognised automotive brand that has fully launched into this sector, becoming one of the main sponsors for five of the largest teams worldwide.

In short, the amount of data emanating from these campaigns allows brands to optimise investments within one of the most important and difficult target groups to reach.

 

Conclusions

At Bedrock, we understand that the demand for data science initiatives within the esports sector will not stop growing any time soon. With massive data sets being collected, it becomes possible to develop strong analytical tools for gamers, teams and advertisers, as well as AI-powered predictive systems that look set to shape the future of professional video gaming.

Stay tuned!

Oct 14 — 2020

Data Science versus Business Intelligence


Intro

Data is everything, everywhere. Imagine that you leave your house in the morning and start pondering which route to take to work: that simple decision, based on a quick estimation of time and distance, is data-related. From a simple commute, we can see that decisions, either driven or influenced by data, are made unconsciously in our day-to-day lives.

Over the past four years, I’ve taken part in strategic data initiatives for leading international firms across various industries. Nowadays, I look for opportunities in businesses, assessing how data can enable our clients to do better. It is common knowledge that, in the second half of 2020, companies are still not seizing the strategic potential of data: according to MIT market research, less than 5% of companies use it well enough to gain a competitive edge. I have also seen how companies manage these projects, and the key common challenge is that stakeholders hold diverse points of view on some basic concepts. Everybody in today’s organisations has heard or talked about Business Intelligence (BI) and Data Science, yet few have given proper thought to their meaning. In these circumstances, interactions during the commercial or operational phases tend to cause frustration, misaligned expectations and failure. Another clear mistake organisations make is underinvesting in organisational culture and mindset change, which is fundamental if individuals are not to overlook the potential of adopting data-driven mechanisms at every layer of the organisation.

Therefore, the main root cause of this lack of success seems to be the misunderstanding of buzzwords. After coming to this conclusion, I felt it could be educational to go back to the fundamentals of BI and Data Science and shed some light on how they compare to each other.

What is commonly understood as Business Intelligence?

Business Intelligence (BI) is a generic term dating back to 1989, whose original definition was along the lines of “mechanisms and the underlying technology to improve business decisions”. Nowadays it is understood as the development of dashboards, digital reports or ad hoc analytical visualisations. For some companies this means basic KPIs displayed digitally, while others use advanced analytical methods based on statistical models; either way, there is some governance and security around the delivery of insights. Regardless of methods or technology, BI aims to provide bullet-proof facts for informed tactical and strategic decision-making, and the priority is putting actionable information in the hands of the managers calling the shots, so they can act quickly on patterns or insights. Concisely put, BI aims to explain past events using data that emanates from the business, whether it comes from marketing, sales or operations.

Ok, so what is then Data Science about?

Firstly, it is considered a science because it aims to discover the unknown using methodical research and analysis techniques. Its recent success in society derives from humans being intellectually curious and unwilling to ignore the unknown. It is about explaining and predicting events using a combination of mathematics, statistics, computer science and business knowledge within a domain-specific context.


The associated job title (Data Scientist) has been around since 2008, when it was introduced by D.J. Patil and Jeff Hammerbacher at LinkedIn and Facebook respectively. It became a fashionable trend when, in 2015, the White House announced its first Chief Data Scientist, and demand for this talent now outpaces supply. Lately, Harvard Business Review even claimed that it is one of “the sexiest jobs of the 21st century”, and some articles claim that data scientists are the new investment bankers; however, the role requires a vast set of skills that is hard to combine. Some of these bright individuals hold PhDs in “exotic” fields like biomedicine or astronomy, but the majority have an academic background in computer science, maths or physics. The data science role is about shaping large quantities of messy and disparate data sets to make their analysis possible, developing modelling and prediction mechanisms, and concluding what happened and what is likely to happen next.


Data Science vs Business Intelligence: commonalities and differences.

In common: both practices provide fact-based insights for motivating, easing and supporting business decisions, although they tend to focus on different time horizons. Both approaches also require a visualisation layer, data management and governance.

Differences: in BI, it comes down to a validated formula or a known method of calculating a KPI. BI provides a reporting mechanism for showing updated values of previously known metrics, dealing with predictable “known unknowns”. BI mostly assists with descriptive analytics.


In Data Science, by contrast, the business comes with questions that have never been asked or answered before, so Data Science deals with unpredictable “unknown unknowns”. As a field of automated statistics in the form of models, Data Science goes further, into predictive and even prescriptive analytics, enabling forecasts and aiding in classifying and predicting outcomes.

In essence: BI is about interpreting and visualising data, whereas data science is about using statistics and other analytical tools to forecast what is likely to occur next.


Conclusions

Data Science is not a newer form of BI. Both are critical milestones for any organisation that aspires to be data-driven so:

 

  • Business Intelligence maturity fits well within a Data Science roadmap as a preliminary step towards predictive analytics. If you think about it, you must first understand and analyse past data and extract insights, and only later build models that allow you to predict the future of your business.
  • The typical BI project framework is not applicable to Data Science projects, because the latter demand that specific operational requirements are met, and the typical IT “Software as a Service, plug-and-play plus configuration” mentality must go out of the window.

 

At Bedrock we strive to pave the way for the democratisation of Data Science and AI (in the form of Machine Learning), and our approach is built upon close collaboration between our team and our clients’ business domain experts, counting on the appropriate support from management. This approach is what guarantees that our projects end up delivering tangible results.

 

Bonus: many Data Analyst and BI specialist teams have rebranded themselves as Data Scientists, and this leads to confusion. Traditional Data Analytics differs from Data Science. Yes, some tasks overlap much of the time, such as data wrangling, crunching, exploratory analysis and data visualisation. However, the difference resides in coding, modelling and the use of algorithms, which is why Data Scientists mostly use Python or R, developing models for correlation, causation and counterfactuals that try to anticipate what is going to happen. Data Analysts are not Data Scientists, and being able to use SQL, Power BI or Tableau is only a tiny fraction of what is required.