Machine Learning
There has been much excitement recently about the potential of machine learning (ML) to transform the investment world. We recently surveyed investment professionals at BNY Mellon’s investment boutiques,1 and most shared this optimism: they believe the technology will represent a “major paradigm shift” and “significantly impact” long-term investment decisions within the next five years. A minority took a more skeptical view, seeing ML as “overly hyped” and predicting only “modest” impact after 10 or more years.

However, most of the investment professionals we surveyed also admitted to having “very limited” or “beginner” level knowledge of the topic. Few provided concrete use cases, especially for long-term investment decisions (decisions with time horizons in months, quarters, or years). In fact, “trading” was the most commonly cited application, and most applications of ML publicized in the media involve shorter-term investment decision-making (horizons in seconds, hours, or days), such as high-turnover long/short strategies at hedge funds.2

This is a problem. If the majority of investment professionals are highly optimistic about the technology and believe it will have major effects on long-term investment decisions, but at the same time they lack deep knowledge of the topic and cannot point to existing or potential use cases, then one of two things is likely:

A. The majority is wrong: ML is overly hyped, at least in the context of long-term investment decision-making, and there are few applications of ML.

B. The majority is right: ML does have applications to long-term investment decision-making, but investment professionals are not well positioned to benefit from their own correct prediction.

In this paper, we aim to help investment professionals resolve this conflict by presenting concrete use cases and examples related to long-term investment decision-making, as well as by providing clear explanations of key concepts and approaches that define ML.3

Our conclusion is that both optimists and pessimists should moderate their views on the topic — ML is neither a panacea nor pure hype, and most investment professionals are not yet knowledgeable enough to make a strong judgment. To deepen their knowledge, investment professionals should start to test real applications of the technology. This may seem paradoxical: it is difficult to remain excited enough about a new technology to take action while maintaining healthy skepticism. But like any technology or tool, ML must be understood at a deep level and tested thoroughly in order to provide value to investment professionals and their clients.

Definitions

We will use the term machine learning (ML) to refer to techniques that enable computers to learn without being pre-programmed with explicit rules. ML techniques are nothing new — the term itself dates back to the 1950s. Although many of these techniques are old, recent innovations in computing have enabled practical and low-cost implementation.

Machine learning’s basic premise is the following:

Rather than telling a computer “if A happens, do B, in order to achieve C,” you tell the computer “achieve C,” and then supply the computer with data it can use to learn how to achieve C.

Here’s a concrete example: instead of a quantitative analyst specifying covariates in a regression equation (deciding what the ‘x’ is in y = a + b*x), in machine learning you simply tell the computer you want to predict ‘y,’ feed it data, and let the computer do much of the rest (employing various strategies to avoid overfitting4). Appendix A (“What Is Machine Learning?”) provides a concrete and detailed example.
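To make the contrast tangible, here is a minimal sketch in Python (using scikit-learn and synthetic data, none of which appear in the original analysis): rather than specifying which ‘x’ variables enter the equation, we hand the algorithm all the candidate columns, tell it to predict ‘y,’ and judge the result on held-out data.

```python
# A minimal sketch of "tell the computer to achieve C and give it data":
# we never specify which columns matter; the model learns that from the data.
# Synthetic data; scikit-learn assumed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))   # ten candidate predictors; no equation specified
y = 2 * X[:, 3] - X[:, 7] ** 2 + rng.normal(scale=0.5, size=1000)  # true (unknown) relationship

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# The model found the structure (columns 3 and 7) without being told the equation.
print(f"out-of-sample R^2: {model.score(X_test, y_test):.2f}")
```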

There is considerable overlap between machine learning and other statistical approaches, many of which have been widely used for decades in asset management.

For example, regression analysis is commonly used in many areas of asset management, and it is also part of some machine learning algorithms. Is regression analysis “machine learning”? Or statistics? Perhaps econometrics? Regression is a tool utilized in all of these areas. That said, machine learning does have its own methods and approaches to problems, distinguishing it from other statistical and computing methods. We will highlight these methods throughout the paper.

A few additional notes on terminology:

  • To avoid ambiguity or confusion, this paper does not use the term “artificial intelligence” (AI).
  • We use ML and machine learning interchangeably in this paper.
  • We use the term “variable” to indicate a vector of information; you can think of it as a column in a database (e.g., the “price” column in a database of security prices).

Security Research and Selection

Across major asset classes, security research and selection form a key part of the active investment management process. When researching and selecting securities, fundamental analysts and portfolio managers spend most of their time looking for “interesting” targets, understanding and analyzing each one at a deep level, and making decisions about whether to include them in a portfolio. On the other hand, quantitative analysts look for patterns in historical data that can be used to select securities for investment.

Both fundamental and quantitative research and security selection approaches could be improved using machine learning.

We highlight four applications of ML techniques to long-term active security research and selection:

1 Automating the Generation of Investment Research Ideas

Machine learning models can help asset managers generate research ideas.

For example, an existing industry model5 aggregates company filings (10-Qs and other filings) and uses historical data to predict future price changes for U.S. public equities over different long-term horizons (quarterly, yearly, etc.). After a firm files, the model is updated and outputs a “score” for each name. A high score means the machine learning model predicts price appreciation; a low score means the opposite.

This model could potentially be useful to fundamental analysts looking for long-term investment research ideas.

Consider the following four cases:

  1. The model says the security (or sector) is expensive, while the analyst thinks it’s cheap; this challenges the analyst to think twice about his/her analysis. What is the analyst seeing that the model is missing? Or is the model picking up on something he/she hadn’t noticed?
  2. The model says the security is expensive, while the analyst has no opinion; this encourages research into a potential sell or short position.
  3. The model says the security is cheap, while the analyst thinks it’s expensive; this encourages research into a potential buy or long position.
  4. The model says the security is cheap, while the analyst has no opinion; this is a potential new value research idea that the analyst may not have found without the algorithm.

Of course, the model’s predictions would only be valuable if it forecasts long-term fair value “well enough.” And even if it does, that is not to say the analyst does not “know better” — for example, an analyst may disagree with the way a firm has decided to categorize certain aspects of revenue, or with what expenses the firm has decided to capitalize, and both may be important nuances that the machine learning algorithm has no knowledge of. But this is precisely where human judgment can play an important role. By using the machine learning model as a guide for research, rather than a replacement for it, the analyst may be able to spend more time on higher-value activities and let the machine do more of the initial screening.
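As an illustration, the four cases above amount to a simple triage rule. The sketch below is hypothetical; the view labels and action strings are invented for the example.

```python
# A hypothetical triage rule for the four cases above. The keys are
# (model view, analyst view); all labels and actions are invented.
ACTIONS = {
    ("expensive", "cheap"):   "case 1: conflict; re-examine the analysis on both sides",
    ("expensive", "no view"): "case 2: research a potential sell or short position",
    ("cheap", "expensive"):   "case 3: research a potential buy or long position",
    ("cheap", "no view"):     "case 4: potential new value idea to investigate",
}

def triage(model_view: str, analyst_view: str) -> str:
    return ACTIONS.get((model_view, analyst_view),
                       "model and analyst agree: lower research priority")

print(triage("cheap", "no view"))  # -> case 4: potential new value idea to investigate
```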

2 Making Unstructured Data Useful for Security Selection and Research

Machine learning can help fundamental and quantitative analysts in a much simpler way as well: by making sense of complex unstructured data.

For example, natural language processing (another area where ML has been used) is often applied to unstructured text like social media posts, news, or earnings calls in order to extract words associated with positive or negative sentiment about a stock or other relevant topic. Similarly, large amounts of sensor-generated data like satellite imagery from parking lots, construction sites, or oil rigs only become meaningful after applying ML to capture and count the objects (cars, cranes, oil rigs) in the images. Ultimately, this processed data may help the analyst forecast company earnings, sectoral performance, and macroeconomic trends.
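To make the idea concrete, here is a toy sketch of lexicon-based sentiment scoring in Python. Real NLP pipelines are far more sophisticated; the word lists below are invented purely for illustration.

```python
# A toy illustration of lexicon-based sentiment scoring over earnings-call text.
# The word lists are invented for the example; production systems use trained models.
POSITIVE = {"growth", "beat", "strong", "record", "improved"}
NEGATIVE = {"miss", "decline", "weak", "impairment", "headwinds"}

def sentiment_score(text: str) -> float:
    """Net sentiment in [-1, 1]: (positive hits - negative hits) / total hits."""
    words = text.lower().split()
    pos = sum(w.strip(".,") in POSITIVE for w in words)
    neg = sum(w.strip(".,") in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment_score("Record revenue growth, but margins face headwinds."))  # -> 0.33
```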

Many vendors have adopted this approach and sell pre-processed data to asset managers. Underlying datasets include satellite imagery, mobile phone geolocation data, news and social media texts, and other online content. Vendors process and aggregate the data in an attempt to make it immediately useful to fundamental analysts.

3 Improving Traditional Quantitative Approaches to Security Selection and Research

Quantitative forecasting methods based on historical data are nothing new in asset management. As one of this paper’s survey respondents6 put it: “We’ve been using these tools (albeit their older versions) for decades as quants […] that said, ML can help us with processing and pattern finding more efficiently.”

The “older versions” include simple (and often effective) forecasting models like ordinary least squares, also known as the linear regression model. Quantitative analysts often explore variations to basic regression models in order to improve predictive power, including variable transformations like adding a squared term, interacting variables with each other, outlier exclusion, and other techniques. This process can easily lead to “overfitting.”7 That is, if you manipulate the data enough, you will eventually achieve a better fit to historical data. But this means the model may fail to generalize to “out-of-sample” data. In other words, the model may look good with the data used to train it, but it may have no predictive ability going forward. The textbook “econometric” approach to this problem is to only use variables based on economic theory or “intuition.” Machine learning provides some additional methods to avoid this problem.

In a sense, machine learning is a way of automating the manual process of regression tuning — trying different combinations and transformations of variables and different model specifications to find what works best. Neural networks, for example, take this approach to an extreme. Neural nets have “hidden layers” that transform input variables (when there are many hidden layers, it’s called “deep learning”). These hidden layers introduce such complex, non-linear relationships and interactivity among variables that the resulting model is often beyond the reach of human interpretability. Neural network models can easily result in overfitting, especially with small amounts of training data.

However, machine learning provides some tools and discipline to tackle overfitting. Andrew Ng, a computer scientist and pioneer in online teaching of ML, describes ML as a “battle against overfitting.” The primary tool for fighting overfitting is to split data into pieces — “training” data to train the model, and “test” data to test it. See Appendix A for a detailed example of this process.
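Here is a minimal sketch of that train/test discipline in Python with scikit-learn, using synthetic data: a heavily transformed model (a degree-15 polynomial) fits the training data almost perfectly but fails on held-out test data, exactly the overfitting pattern described above.

```python
# A sketch of the "battle against overfitting": heavy variable transformation
# (here, a degree-15 polynomial) fits training data beautifully but fails
# on held-out test data. Synthetic data; scikit-learn assumed.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=(60, 1))
y = x.ravel() + rng.normal(scale=0.5, size=60)   # the true relationship is just linear

x_tr, x_te, y_tr, y_te = train_test_split(x, y, random_state=1)

for degree in (1, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(x_tr, y_tr)
    print(f"degree {degree:>2}: train R^2 = {model.score(x_tr, y_tr):.2f}, "
          f"test R^2 = {model.score(x_te, y_te):.2f}")
# The degree-15 model "wins" in-sample and loses out of sample: overfitting.
```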

Another tool is regularization, which can also be applied to traditional linear regression models to potentially reduce forecasting noise and overfitting. This method shrinks regression coefficients toward zero by an amount controlled by a penalty parameter, lowering the chance that any given coefficient is overestimated. The penalty parameter can be varied and tested out of sample for the purposes of “tuning” it to the value that works best. While common in machine learning models, econometricians are only beginning to use this method.8
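A minimal sketch of regularization, assuming scikit-learn and synthetic data: ridge regression adds a penalty on coefficient size, and the penalty strength is tuned by testing candidate values out of sample via cross-validation.

```python
# Ridge regression as a regularization example: the penalty parameter (alpha)
# shrinks coefficients toward zero and is tuned by cross-validation rather
# than fixed in advance. Synthetic data; scikit-learn assumed.
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV

rng = np.random.default_rng(2)
n, p = 80, 40                    # few observations relative to predictors
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.0, -0.5, 0.25]     # only three predictors truly matter
y = X @ beta + rng.normal(scale=1.0, size=n)

ols = LinearRegression().fit(X, y)
ridge = RidgeCV(alphas=np.logspace(-2, 3, 30)).fit(X, y)  # tunes alpha out of sample

print("largest OLS coefficient:  ", np.abs(ols.coef_).max())
print("largest ridge coefficient:", np.abs(ridge.coef_).max())
print("chosen penalty (alpha):   ", ridge.alpha_)
# Shrinking coefficients reduces the chance any one of them is overestimated.
```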

As one of our survey respondents emphasized, a key challenge to predicting security prices or returns using quantitative methods remains: “[the challenge is] adapting the algorithm to changing environments in financial markets.” Even if overfitting is avoided, the relationships among variables in the data may change in unexpected ways in the future. There may simply be no historical data to train the model on, because the world has changed in some fundamental way.

While this issue is not unique to asset management, it is much easier to achieve stable and performant machine learning models in other areas. If your goal is to identify cats in photographs using machine learning, you have one major advantage over the finance quant: whatever combination of pixels signifies a “cat” will always stay the same. In contrast, the combination of factors contributing to stock price appreciation over the course of a year can change significantly and quickly.

4 Replicating an Analyst’s Security Selection

While the typical quantitative analysis attempts to predict security returns using data and models, fundamental analysis adds a more “human” element, relying on the judgment of analysts to determine the potential future path of security returns.

For example, many firms use the “buy/sell/hold” rating system to translate analyst insights into action. These ratings are based in part on available “hard” data (like financial statements) and “soft” data (like management commentary). But they also rely on the nuances and idiosyncrasies of individual securities, which may be difficult for traditional quantitative models to capture. If so, this would make human judgment indispensable to the investment process. Others believe that human judgment, while valuable at times, is subject to behavioral biases that make it unreliable; or they believe that data and models can in fact capture enough of the nuances to reliably outperform human judgment.

Machine learning could potentially help test this hypothesis, and provide benchmarks or guardrails for fundamental analysis, by directly predicting individual analyst ratings. The ML model would effectively be a “robo-analyst,” trained to mimic a specific analyst’s decisions. The input data would be an individual analyst’s historical ratings (for example, over the course of 10 to 15 years for more senior analysts), as well as any available data on security and market characteristics over time — both “hard” and “soft” data if possible.
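A hypothetical sketch of this “robo-analyst” in Python with scikit-learn follows; the features, the rating rule, and all data below are invented stand-ins for an analyst’s actual history.

```python
# A hypothetical "robo-analyst": train a classifier on one analyst's historical
# buy/sell/hold ratings plus the security characteristics available at the time,
# then measure how well it replicates held-out decisions. All data is invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1500  # rating events over 10-15 years of a hypothetical senior analyst
features = rng.normal(size=(n, 6))   # e.g., valuation, momentum, leverage, sentiment
# Invented stand-in for the analyst's actual historical ratings:
ratings = np.where(features[:, 0] > 0.5, "buy",
          np.where(features[:, 0] < -0.5, "sell", "hold"))

X_tr, X_te, y_tr, y_te = train_test_split(features, ratings, random_state=3)
robo = GradientBoostingClassifier(random_state=3).fit(X_tr, y_tr)

agreement = (robo.predict(X_te) == y_te).mean()
print(f"robo-analyst agrees with the analyst on {agreement:.0%} of held-out ratings")
```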

Consider the following scenarios for a given analyst; assume that, by some metric, the analyst’s ratings are predictive of future returns or otherwise demonstrate the analyst’s skill:

  1. The model fails to replicate the analyst’s decisions. In this scenario, it becomes harder to reject the value of the “human element” in investment decision-making; this result could provide further confirmation of the value of a particular analyst.
  2. The model replicates the analyst’s ratings reasonably well, but the analyst outperforms the model when his/her ratings differ from that of the model. In this scenario, again, it becomes harder to reject the value of the “human element.” Here, the model could still be used as “guardrails” for the analyst’s decisions in the future (e.g., imagine a system that pops up the following message: “The robo-analyst says ‘buy’ and you say ‘hold’ — in 72% of cases like this the robo-analyst is right and you are not; do you want to reconsider?”).
  3. The model replicates the analyst’s ratings with a high level of accuracy. In this scenario, the analyst is challenged to change his/her methods to improve upon the model.

A word of caution: This is a hypothetical example, and the exercise may not be feasible in many cases due to data limitations. For example, historical ratings data may not be readily available. Data limitations may also hinder the interpretation of the results: for example, if individual analysts generate very few ratings over time (e.g., in low-turnover, long-horizon equity strategies), then the ML model will likely have insufficient training data to replicate the analyst’s ratings with any degree of accuracy. If the analyst’s ratings are highly predictive, this result might still confirm the analyst’s skill, but it would not mean that an ML model could never work; for example, the analyst could provide more training data. Similarly, historical data on security characteristics and market conditions may not currently be rich enough for the ML to put up a “fair fight” against the best analysts. Again, while this wouldn’t invalidate the conclusion that humans add value, it also does not mean that, with richer data, ML couldn’t “win” in the future.

Limitations aside, the thought exercise may be productive; and in some cases, it may be feasible to implement and test this use case.

Asset Allocation

Asset allocation is a more challenging use case for machine learning because the volume of data is smaller.

Once you aggregate to the asset class level, you’re left with a relatively small number of entities (asset classes or risk factors) and usually a fairly limited time series (at most ~100 years, but even then with significant auto-correlation9 and changing relationships among the time series). The size of the “independent” sample is relatively small.

That said, machine learning may still contribute to the asset allocation process.

Multi-Regime Identification

During the asset allocation process, “regime” identification plays an important role. Individual asset classes show different characteristics during different time periods (regimes), implying different optimal asset allocations depending on the regime. Machine learning may help identify regime changes, or even discover brand new regimes. In particular, a branch of ML called “unsupervised learning” could play a role in regime identification. These models “learn the hidden structure in data;” for example, retail firms use these methods for client segmentation to group clients across many different attributes. Techniques such as clustering, density estimation, and anomaly detection fall into this category. In the context of regime identification, given a set of variables on market conditions and macro factors, unsupervised learning algorithms could potentially detect regime changes or even discover anomalies from past clusters, which could represent new regimes.
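As a rough illustration, the sketch below clusters synthetic “monthly market condition” data with k-means (one of the clustering techniques mentioned above); the features and regime structure are invented for the example.

```python
# A sketch of unsupervised regime detection: cluster months by market conditions
# and read the clusters as regimes. Synthetic data; scikit-learn assumed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
calm   = rng.normal([0.8, 1.0, 0.2], 0.3, size=(120, 3))   # monthly return, vol, rate move
crisis = rng.normal([-2.0, 4.0, -0.5], 0.6, size=(24, 3))
X = np.vstack([calm, crisis])

model = KMeans(n_clusters=2, n_init=10, random_state=4).fit(StandardScaler().fit_transform(X))
labels = model.labels_

# A new observation far from every learned cluster center could flag a new regime.
print("months per detected regime:", np.bincount(labels))
```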

Factor Selection

Using machine learning to select the factors themselves may be more challenging. The size of samples (asset class-level returns) relative to the number of potential factors is small. This means the likelihood of overfitting is high. Without clear guidance from theory or intuition, a purely quantitative approach is likely to lead to “data mining” — i.e., finding spurious relationships that fail to generalize out of sample. With strong discipline, it may be possible to avoid overfitting, and some methods and tools from machine learning may well be useful for factor selection. But the direct application of ML to factor selection seems less promising than other use cases.

Risk Management

Risk management is a core component of the investment management process. After a portfolio has been constructed, the process of ongoing risk management becomes crucial to maintaining and enhancing portfolio performance. Machine learning methods may improve the risk management process as well.

Amazon “Alexa” for Portfolio Managers

Risk management encompasses a wide range of activities, from basic monitoring of positions and prices to sophisticated stress tests, sensitivity metrics, and scenario analyses. In some cases, the data required to perform these activities is readily accessible; for example, commercially available risk systems often produce this data automatically. In other cases the data is not readily accessible, and manual work is required to generate Excel files and reports.

In either case, when a portfolio manager wants to know, say, the effective duration of the fixed income component of a portfolio, either the portfolio manager needs to log into a system to find out, or someone has to look it up or calculate it for them. With machine learning — plus some good data engineering — a voice-enabled bot like Amazon’s Alexa could potentially do the job faster. In fact, Alexa is already being used on trading floors.10

The simple use case of “basic facts” will help illustrate the idea. Basic portfolio characteristics and analytics typically live in a database that could easily be accessed by a machine learning application.

All the application would need to do is the following:

  • Run standard speech recognition models that convert voice to text; the portfolio manager would say out loud, “What was the three-month tracking error of account 1234 on 3/1/2018?” and the model would convert that to a text file on a computer.
  • Run standard natural language processing (NLP) models11 to identify sentence structures (known as “part-of-speech tagging”).
  • Figure out what the user wants based on identified inputs (e.g., metric = “tracking error,” account = “1234,” horizon = “3 months,” as-of date = “3/1/2018”) and map that to a data extraction process, e.g., an SQL query.
  • Get feedback from the user so it can “learn” to improve automatically.
If such an application were “good enough,” it could minimize the amount of human error, save time and resources, and potentially be applied to more complex use cases like market updates, scenario analysis, sensitivity analysis, and more complex risk metrics.
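A toy sketch of the middle steps in the pipeline above, written in Python: extract the “slots” from the transcribed question and map them to a parameterized SQL query. The regular expressions and table schema below are invented; a production system would rely on trained NLP models rather than hand-written patterns.

```python
# A toy slot-extraction step: transcribed question -> extracted parameters -> SQL.
# Regexes and schema are invented for illustration only.
import re

QUESTION = "What was the three-month tracking error of account 1234 on 3/1/2018?"

WORD_TO_MONTHS = {"one": 1, "three": 3, "six": 6, "twelve": 12}

metric  = re.search(r"(tracking error|duration|sharpe ratio)", QUESTION, re.I).group(1)
account = re.search(r"account (\d+)", QUESTION, re.I).group(1)
horizon = WORD_TO_MONTHS[re.search(r"(\w+)-month", QUESTION, re.I).group(1).lower()]
asof    = re.search(r"on (\d{1,2}/\d{1,2}/\d{4})", QUESTION).group(1)

# Parameterized query against a hypothetical analytics table:
sql = "SELECT value FROM analytics WHERE metric=? AND account=? AND horizon_months=? AND asof=?"
params = (metric.lower(), account, horizon, asof)
print(sql, params)
```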

Liquidity Management

A key risk management function for some portfolio managers is liquidity management. Low-liquidity asset classes like high yield bonds, private real estate, or small-cap stocks are subject to redemption risk: clients could demand liquidity faster than managers can liquidate assets without affecting their prices. Therefore, managers must optimize liquidity by attempting to minimize the impact of client withdrawals on investment performance, while also avoiding too much cash drag.

There are two main “unknowns” in this problem: one is the cost of liquidity — how much of a hit would the manager take to liquidate 1%, or 2%, or 5% of the portfolio? The other is the probability and magnitude of redemptions. The latter may be a good use case for ML. The IM Data Solutions team prototyped an ML model in 2015 to predict outflows for U.S. mutual funds. The model used past flow data, performance data, and industry flow data to predict a fund’s flows one month out, and achieved a 22% reduction in the error rate12 relative to a naïve “martingale” model.13 The financial impact of employing such a model has not been established, and would depend on the specific application (e.g., high yield fund vs. small-cap stock fund). This could be a fruitful area for further work.
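As a rough sketch of the flow-prediction idea (not the actual IM Data Solutions model, whose details are not given in this paper), the example below compares a simple ML model against the naïve martingale baseline on invented data.

```python
# Predict next month's fund flows and compare the error against the naive
# "martingale" baseline (next month's flows = this month's flows).
# The data-generating process is invented purely to make the example run.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 600
past_flow   = rng.normal(size=n)
performance = rng.normal(size=n)
industry    = rng.normal(size=n)
next_flow = 0.5 * past_flow + 0.3 * performance + 0.2 * industry \
            + rng.normal(scale=0.5, size=n)

X = np.column_stack([past_flow, performance, industry])
train, test = slice(0, 450), slice(450, n)

model = RandomForestRegressor(random_state=5).fit(X[train], next_flow[train])

rmse = lambda pred, truth: np.sqrt(np.mean((pred - truth) ** 2))
print("martingale RMSE:", rmse(past_flow[test], next_flow[test]))
print("model RMSE:     ", rmse(model.predict(X[test]), next_flow[test]))
```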

Challenges

It is not easy to apply machine learning to improve the investment decision-making process.

Some key challenges include:

1 Tool Obsession

When you have a hammer, everything looks like a nail. Many problems can be solved with simpler tools, and these tools should be tried first.

2 Skills Gap/Talent Shortage

Many asset managers do not have ML experts, and ML experts are costly due to talent shortages. Also, pure ML experts without domain knowledge may be both expensive and unproductive.

3 Cost of Technology Implementation

In order to implement and maintain ML systems, significant investment is required in software, systems, and data.

4 Mindset Shift

Some business leaders and portfolio managers will reject the idea of using ML because they don’t understand it; others may not accept the idea of using “black box” algorithms that are not easy to interpret (even though human decision-making is often even less interpretable); others may simply be unwilling to change, for no particular reason (also known as “status-quo bias”).

5 Model Overfitting

If machine learning is used inappropriately, the risk of overfitting can be high, leading to poor performance when the model is applied to real decision-making situations.

6 Bad Data and Legacy Systems

As a survey respondent put it, “How do you implement AI with out-of-date systems?” Poor quality data and cumbersome or messy data systems can impede machine learning applications; in some cases, the required historical data may simply not exist, due to system limitations or poor data management.

7 Not Enough Training Data

For some use cases, even when high-quality data is available, ML may not work well due to limitations in either the number of observations (rows of data) or the number of variables (columns of data), or both. For example, applying machine learning to predict a future U.S. recession is challenging because there are not many historical examples of recessions.

8 The Competition

Tech firms like Google and Amazon have the technology infrastructure, ML skills, unique datasets, and the financial resources to hire domain experts. These firms could compete with quantitative asset managers if they chose to enter the asset management business. In addition, incumbents like Two Sigma have already made large investments in infrastructure and skills, creating barriers to entry for other asset managers to replicate their success.

Conclusion

Our hope is that this paper sheds light on some existing and potential machine learning use cases for long-term investment decision-making, and that it will give the reader a balanced view of ML in this context. Machine learning is neither a panacea for investors nor an overly-hyped fad. Rather, it is a powerful tool, and, like any tool, it has both productive and unproductive uses.

Machine learning should not be over-exalted. There are many decisions and processes in asset management where ML will simply not succeed. But a distinction should be made among the reasons why ML does not work well: are they fundamental reasons — a decision or process that “cannot” be learned by a machine, for some reason, or are they potentially temporary practical reasons, like data limitations that could be overcome?

Machine learning should also not be dismissed. It is a powerful set of tools that will continue to improve over time. Despite the many challenges in finding good use cases and executing on them, ML has the potential to add value in many areas of investment management.

Finally, we recommend specific action: go try it. If ML can help improve outcomes for clients, no investment professional should shy away from it — even if it presents a career threat. Investing in pilot projects and low-cost tests could yield a significant return on investment.

Bibliography

Athey, Susan. “The Impact of Machine Learning on Economics.” NBER Working Paper (January 2018).

Lopez de Prado, Marcos. Advances in Financial Machine Learning. New Jersey: John Wiley & Sons, 2018.

McNelis, Paul. Neural Networks in Finance: Gaining Predictive Edge in the Market. Burlington: Elsevier Press, 2005.

Mullainathan, Sendhil and Jann Spiess. “Machine Learning: An Applied Econometric Approach.” Journal of Economic Perspectives, Volume 31, No. 2 (Spring 2017): 87–106.

Appendix A:
What Is Machine Learning?

Machine learning is a way of making a computer do something without explicitly telling it what to do; instead, you give the computer a goal, and the computer “learns” what to do.

But those are just words.

This appendix demonstrates what ML is, using a simple example. Although it’s an original analysis using real data, it’s not intended to be especially impressive or useful for any particular decision.

It’s only intended to demonstrate how machine learning works in practice.

Problem Statement

Let’s say you have a dataset of 2,757 passive mutual funds and ETFs from the Morningstar fund database. Each fund is categorized as Equity, Alternative, Fixed Income, or Allocation:14

For each fund, you also collect 12 risk and return metrics15 for a given point in time (October 31, 2017):

Now, suppose you have a new list of 1,182 different passive mutual funds and ETFs (no overlap with the original 2,757). But for these “mystery funds,” you only have the risk and return metrics — the labels are missing:

Of course, for the two funds in the example above, the correct categories are obvious from looking at the names. And if you know something about what’s been going on in capital markets recently, you might even guess that a 2% monthly return is likely an Equity fund, while a 0.01% return is likely Fixed Income.

But could a machine learn that? Could a machine fill in the category based solely on the risk and return numbers?

Results

It turns out the answer is: yes, it can. We ran a basic, out-of-the-box machine learning model using this data, and achieved an overall accuracy of 94%.16 To put that in perspective, random guessing would achieve 25% accuracy; just guessing everything is Equity (the largest category) would get you 64% accuracy. Accuracy of 94% means that out of the 1,182 unlabeled “mystery” funds — funds the machine had never seen before — it only got 75 wrong.

Training and Test Data

Our original data set, the 2,757 funds, is called “training data.” It’s called that because it’s the data that gets the machine ready (trained) to do the job we want it to do (correctly categorize funds). The machine uses the training data to “learn” how risk and return data is related to categorization.

Here are the results by category:

After the machine has been trained, we give it the risk/return data for the 1,182 funds that are missing a category. This is called “test data.” The machine has never seen these funds before, but it will try to use what it’s learned from the 2,757 funds in the training data to guess at the right category. This is the best test of whether the algorithm really works. In traditional quant finance parlance, the “test data” is “out of sample.”

Just to be clear, when we feed the machine the data about the 1,182 funds, all it sees are those risk/return numbers. Nothing else. It doesn’t know the fund names. It doesn’t know these are passive funds. It doesn’t know the as-of date for the returns. And of course it doesn’t know any finance.

For example, it doesn’t know that bonds represent senior claims on assets, while stocks are residual claims, so that for a given issuer a stock needs to compensate investors for the higher risk with higher returns relative to a bond. It doesn’t know that as of 10/31/2017, global oil and gas stocks hadn’t done too well on 1-, 3- and 5-year time horizons, in contrast to the U.S. equity market average, or that utilities sectors in some countries provide bond-like returns. It just knows the risk and return numbers and the labels for the 2,757 funds we gave it in the training data. It needs to somehow learn from that training data what makes an equity fund an equity fund, and what makes an alternative an alternative (at least, according to Morningstar).

In case you think this is always easy and obvious, consider this real example:

It’s not obvious from just those data points what each fund is.

Of course the training data has 12 risk and return fields, so there’s a lot more information than these two metrics. But categorizing funds accurately is not as easy as coming up with a simple rule like “if a fund has a 3Y return greater than X% but less than Y%, it must be an equity fund.”

The Algorithm

The machine uses the training data to find relationships between the numbers (the risk and return metrics) and the labels (Equity, Fixed Income, etc.). The specific algorithm used here is called the “k-nearest neighbors” algorithm. What the algorithm does is take each of the 1,182 “mystery funds” (the unlabeled test data) and find which of the 2,757 labeled funds (the training data) it looks the most like, in terms of all the different risk and return metrics. Once it has found a peer group of similar funds, it simply guesses that the mystery fund’s category is the majority category for the peer group.

To take an example, let’s say we only have two metrics, 3Y Net Return and 3Y Sharpe Ratio, instead of 12. If we plot these two metrics for the 2,757 funds in the training data, it looks like this:

Now let’s look at one of the mystery funds, call it Mystery Fund #1. It has a 3Y Net Return of negative 22% and a 3Y Sharpe Ratio of negative 0.2. Which of the training data funds does Mystery Fund #1 look the most like?

Most of the funds in the training data that are near Mystery Fund #1 are orange, meaning they are categorized as Alternatives. One is an Equity fund (the blue dot). So the algorithm, roughly speaking, will guess that the mystery fund is an Alternative, since the majority of funds near it are orange. We picked a relatively easy example, but, as you can see, if the mystery fund were somewhere in the top right quadrant of the chart, where there is more overlap across categories, it would be harder to guess based solely on these two metrics.

The algorithm does this same procedure with all the metrics at once, not just two. To get a bit more technical for a moment, the algorithm first maps each fund’s normalized17 risk and return data into m-dimensional Euclidean space, where m is the number of risk and return metrics (in this case, 12). You can think of it like this: instead of taking two metrics and making a two-dimensional scatter plot like the one above, the algorithm takes all 12 metrics and makes a 12-dimensional scatter plot. Then, for each “mystery fund,” the algorithm finds the k nearest funds from the training data (nearest in terms of Euclidean distance). The number k could be anything, and you can test to see what works best (in this case we used k = 5, so the five nearest funds). The algorithm then calculates the majority category for those k “nearest neighbors” in the training data, and uses that to guess the category for the mystery fund. Pretty simple!
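For readers who want to see it in code, here is a minimal version of the procedure in Python with scikit-learn; the fund data below is a synthetic stand-in for the Morningstar dataset used in the actual exercise.

```python
# The procedure just described: normalize the 12 risk/return metrics, fit
# k-nearest neighbors with k = 5, and classify the held-out "mystery" funds.
# Synthetic stand-in data; the real exercise used the Morningstar dataset.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n_train, n_test, m = 2757, 1182, 12
X_train = rng.normal(size=(n_train, m))                  # 12 risk/return metrics per fund
labels = np.array(["Equity", "Fixed Income", "Alternative", "Allocation"])
y_train = labels[(X_train[:, 0] * 2).astype(int) % 4]    # invented stand-in categories

knn = make_pipeline(StandardScaler(),
                    KNeighborsClassifier(n_neighbors=5)).fit(X_train, y_train)

X_test = rng.normal(size=(n_test, m))                    # the "mystery" funds
predictions = knn.predict(X_test)                        # majority vote of 5 nearest neighbors
print(predictions[:5])
```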

Man vs. Machine

Simple algorithms can be very powerful. Or maybe the problem is just really easy if you know a bit about finance?

To answer that question, we asked a colleague (Bob) to do the same exercise as the machine. Bob knows a lot about finance. He has a CFA, 30 years of experience as a quantitative analyst, and looks at Morningstar data like this every day. Bob got to look at our training data, just like the machine did. We also told Bob that the metrics were as of 10/31/2017, and we included the names of funds in the training data. In other words, we gave Bob a slight edge by giving him a little more information (context) than we gave the machine. Then we gave him a random subset of the 1,182 “mystery” funds — 100 funds in total — with just the risk and return numbers (no names, no categories), and we asked him to guess the correct category.

Bob did really well. But not as well as the machine.

Here are the results:

Recall that random guessing would get you an accuracy score of 25% and guessing everything is an Equity fund would get you 64%. So it looks like Bob knows what he’s doing. And he did outsmart the machine for three funds. Overall, though, the machine did better (91% for the machine vs. 71% for Bob). Interestingly, for the six funds that both Bob and the machine got wrong, they guessed exactly the same (wrong) category.

This demonstration is a simple example of a particular type of machine learning — supervised classification. There are other types of learning too, and of course the algorithms and applications can get much more complicated.

Appendix B:
BNY Mellon IM Survey of Investment Professionals

We surveyed senior leaders and investment professionals across BNY Mellon investment boutiques and other functional areas. Thirty-eight (38) people responded, including CIOs, PMs, traders, and both fundamental and quantitative analysts.

Results

Most respondents believe AI/ML will be a “profound/major paradigm shift” in general; and most believe it will “significantly” impact long-term decision-making in asset management.

The majority of respondents also believe that the wide application of ML to investment management is either already here or will be in the next five years.

In terms of where ML might be most applicable, “Trading” was the most common response, but the majority of respondents agreed that other areas would be impacted as well.

All that said, the majority of respondents did not consider themselves experts in ML; half described themselves as “beginners” and 13% said they had “very limited” knowledge of ML technology.

Select Survey Comments Received

Many participants also responded to an optional open-ended question:

What opportunities and/or challenges do ML/AI pose to your future business?

Below is a selection of responses:

The authors would like to thank Alicia Levine, Jamie Lewin, Charlie Dolan, Dave Daglio, Lee Bollinger, Andrew Plumb, and Nick Greenland for their contributions and helpful comments to this paper.

1 See Appendix B: BNY Mellon IM Survey of Investment Professionals.

2 For example, hedge funds like Two Sigma, Citadel, and Point72 currently employ ML techniques; the “AI Powered Equity ETF” is run by IBM’s Watson; and many smaller hedge funds and trading shops use ML for making investment decisions. As Hiromichi Mizuno, the CIO of the world’s largest sovereign wealth fund (Japan’s GPIF), put it: “[…] artificial intelligence will be able to either replace or enhance the asset managers’ work, particularly for short-term trading.”

3 See Appendix A: What Is Machine Learning?

4 For a definition of “overfitting,” see Security Research and Selection; use case #3.

5 See http://trill.ai/why-machine-learning-now-has-a-place-in-long-term-asset-management/.

6 See Appendix B: BNY Mellon IM Survey of Investment Professionals, for more detail.

7 https://www.coursera.org/learn/machine-learning/lecture/ACpTQ/the-problem-of-overfitting.

8 Athey, Susan. “The Impact of Machine Learning on Economics.” NBER Working Paper (January 2018).

9 Highly auto-correlated data means you don’t have as much data as you think; e.g., measuring someone’s height every day for one year does not mean you have 365 independent samples, because the observations are not independent.

10 https://www.bloomberg.com/news/articles/2018-03-26/jpmorgan-brings-amazon-s-alexa-to-wall-street-trading-floors.

11 NLP was discussed previously in Security Research and Selection.

12 Root mean squared error, to be precise.

13 A “martingale” is a model in which next period’s redemptions are assumed to be equal to this period’s redemptions.

14 Asset class categorization and active/passive distinction as defined by Morningstar.

15 Specifically: Net Return for 1m, 2m, 3m, 6m, 1y, 2y, 3y, 4y, 5y time horizons; Sharpe Ratio for 1y, 3y, and 5y.

16 Accuracy is usually not the best metric to use, but we chose it for simplicity: Accuracy = number of funds classified correctly divided by total number of funds.

17 “Normalized” means each metric is rescaled (e.g., to zero mean and unit variance) so that no single metric dominates the distance calculation.

All investments involve risk including loss of principal. Certain investments involve greater or unique risks that should be considered along with the objectives, fees, and expenses before investing. Asset allocation and diversification cannot assure a profit or protect against loss.

Views expressed are those of the advisor stated and do not reflect views of other managers or the firm overall. Views are current as of the date of this publication and subject to change. Forecasts, estimates and certain information contained herein are based upon proprietary research and should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this material may be reproduced in any form, or referred to in any other publication, without express written permission. The Dreyfus Corporation and MBSC Securities Corporation are companies of BNY Mellon.

© 2018 MBSC Securities Corporation, 225 Liberty Street, 19th Floor, New York, NY 10281.

MARK-26688-2018-04-26