The Influence of Big Data on Central Banks

The Big Data banking evolution is underway, with 23 of 42 central banks reporting an active interest in Big Data.
Photo courtesy of Hernan Pineira. http://bit.ly/2zD7K0D

Big Data is a popular topic in central banks around the world.

According to Big Data in Central Banks, a joint survey conducted by BearingPoint and Central Banking, 23 of 42 central banks reported an active interest in Big Data.

A number of central banks around the world have already started creating departments dedicated to Big Data, in search of deeper insight into the economies they manage.

“Isaac Asimov once said, ‘I do not fear computers. I fear the lack of them,’” said David Hardoon, chief data officer at the Monetary Authority of Singapore, in a recent speech. “We are now starting to put in place the necessary tools, infrastructure and skillsets to harness the power of data science to unlock insights, sharpen surveillance of risks, enhance regulatory compliance and transform the way we do work.”

Public sources such as Google Trends and job websites can help predict the present state of an economy (a practice known as nowcasting), while confidential data such as credit registers can help identify a bank that is struggling.

The most recent shift towards Big Data is more of an evolution than a revolution, triggered by two things: new opportunities opened up by rapid technological progress, and experience gained over multiple years of crisis management.

This Big Data banking evolution of sorts has already proven to be successful. Today’s policymakers have access to a large number of datasets, which are often very different in nature and scope. Some have come about as a result of new financial regulations, while others are a result of the increased use of technology. When used together responsibly, they can help policymakers extract more timely and diverse economic signals, making them a valuable complement to existing data.

Policy Analysis with Big Data

The “Billion Prices Project” is an early example of online data being used for policy analysis. It was launched by the Massachusetts Institute of Technology (MIT) in 2008. Through the project, price indices for more than 20 countries are posted online every day using a technique known as “web scraping”, in which a machine automatically collects price information from the websites of hundreds of retailers that also have physical outlets. By 2015, about 15 million prices were being collected every day from 900 retailers.
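To make the mechanics concrete, here is a minimal sketch in Python of the kind of daily scraping job such a project relies on. The retailer URL and CSS selectors are hypothetical placeholders invented for illustration, not the project’s actual sources.

    # Minimal sketch of a daily price-scraping job, in the spirit of the
    # Billion Prices Project. The URL and selectors below are hypothetical
    # placeholders, not the project's actual sources.
    import datetime

    import requests
    from bs4 import BeautifulSoup

    RETAILER_URL = "https://example-retailer.com/catalog"  # hypothetical

    def scrape_prices(url: str) -> list[dict]:
        """Fetch one catalog page and extract (date, product, price) records."""
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        today = datetime.date.today().isoformat()
        records = []
        for item in soup.select("div.product"):  # hypothetical page structure
            name = item.select_one(".name").get_text(strip=True)
            price = float(item.select_one(".price").get_text(strip=True).lstrip("$"))
            records.append({"date": today, "product": name, "price": price})
        return records

    if __name__ == "__main__":
        # A real pipeline would loop over hundreds of retailers and store
        # the records for daily index construction.
        print(f"Collected {len(scrape_prices(RETAILER_URL))} prices")

Run daily across many retailers, records like these are what feed the construction of the published indices.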

The advantages for policymakers are fairly obvious: high-quality online inflation data is much timelier than official price statistics and may cover a much larger number of products.

Eurostat, by comparison, relies on the national statistical institutes of the 28 European Union member states to collect around three million prices for the Harmonised Index of Consumer Prices each month, and publishes the index 17 days after the end of each month.

This online price data is being used to improve short-term forecasts and to check the robustness and reliability of current price indices. Mixed-frequency models are another way to enhance existing forecast models that are based on monthly data, as sketched below.
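One common mixed-frequency approach is a “bridge” regression that maps a daily online price index onto official monthly inflation. The following is a minimal sketch of that idea; the series names and pandas structure are assumptions for the example, not any central bank’s actual model.

    import numpy as np
    import pandas as pd

    def bridge_nowcast(online_daily: pd.Series, official_monthly: pd.Series) -> float:
        """Nowcast official monthly inflation from a daily online price index.

        online_daily: daily index levels with a DatetimeIndex (illustrative).
        official_monthly: month-on-month official inflation rates (illustrative).
        """
        # Bridge step: aggregate the daily index to monthly averages, then
        # take month-on-month growth so both series share a frequency.
        online_monthly = online_daily.resample("ME").mean().pct_change().dropna()

        # Align the two series on the months where both are observed.
        aligned = pd.concat({"x": online_monthly, "y": official_monthly}, axis=1).dropna()

        # Least-squares fit of official inflation on online inflation.
        X = np.column_stack([np.ones(len(aligned)), aligned["x"].to_numpy()])
        beta, *_ = np.linalg.lstsq(X, aligned["y"].to_numpy(), rcond=None)

        # Apply the fitted relationship to the latest online reading, which
        # is available weeks before the official index is published.
        latest_x = online_monthly.iloc[-1]
        return float(beta[0] + beta[1] * latest_x)

The timeliness gain comes entirely from the last step: the online series for the current month exists well before the official release does.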

However, online prices are only one point of interest for policymakers. Researchers at the European Central Bank (ECB) have looked into large-scale barcode scanner data to investigate which factors determine prices and the degree of price dispersion, work that falls under the umbrella of the Eurosystem’s Inflation Persistence Network. The data consists of 3.5 million observations on the price and quantity of individual products sold, and has confirmed that competition at the producer and consumer level is a key factor in micro-level price setting.
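As a rough illustration of what “price dispersion” means here, the sketch below computes one common measure, the coefficient of variation of a product’s price across outlets in a given week. The column names are assumptions for the example, not the ECB dataset’s actual schema.

    import pandas as pd

    def price_dispersion(scans: pd.DataFrame) -> pd.Series:
        """Cross-outlet price dispersion per product and week.

        Expects one row per outlet observation, with columns
        "week", "outlet", "barcode", "price" (illustrative names).
        """
        # Pool each product's prices across outlets within a week.
        grouped = scans.groupby(["week", "barcode"])["price"]
        # Coefficient of variation: standard deviation relative to the mean.
        return grouped.std() / grouped.mean()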

The insights gained from the use of online and scanner data have also encouraged the ECB to make micro-price research a priority between 2018 and 2020.

The ECB uses Big Data to improve business cycle analysis as well. For example, search data from Google has been suggested as a potentially valuable source of data for policymakers. In 2012, Hal Varian and Hyunyoung Choi showed, in a paper titled “Predicting the Present with Google Trends”, that Google searches can help predict economic activity.

Building on this idea, ECB staff explored the possibility of nowcasting unemployment using the search volumes that Google Trends reports for queries broadly related to unemployment. They found that many of the search terms correlated with unemployment and that including them may reduce forecasting errors by up to 80%.
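To show the shape of such an exercise, here is a minimal sketch that compares the out-of-sample error of a simple autoregressive model with and without a Google Trends regressor. The input arrays are illustrative placeholders, not the ECB’s actual data or model.

    import numpy as np

    def nowcast_errors(unemployment: np.ndarray, trends: np.ndarray) -> tuple[float, float]:
        """Mean absolute one-step errors: AR(1) alone vs AR(1) plus search data."""
        # Align target, its lag, and the contemporaneous search index.
        y, y_lag, x = unemployment[1:], unemployment[:-1], trends[1:]
        split = len(y) // 2  # fit on the first half, evaluate on the second

        def mae(*features: np.ndarray) -> float:
            X = np.column_stack([np.ones(len(y)), *features])
            beta, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
            return float(np.abs(y[split:] - X[split:] @ beta).mean())

        return mae(y_lag), mae(y_lag, x)

If the second error comes out materially lower than the first, the search data is adding predictive value beyond what the unemployment series’ own history provides.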

Similarly, electronic payment data from credit cards and cash withdrawals from ATMs have been shown to help forecast private consumption as well as GDP growth, provided the data is made available in a timely manner. So far, internal findings suggest that in some countries there is a strong correlation between payment data and private consumption.
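The correlation check itself is straightforward; the sketch below computes it over quarter-on-quarter growth rates, with both input arrays standing in for real payment and consumption series.

    import numpy as np

    def growth_correlation(payments: np.ndarray, consumption: np.ndarray) -> float:
        """Pearson correlation between quarter-on-quarter growth rates."""
        # Convert levels to growth rates so the comparison ignores scale.
        g_pay = np.diff(payments) / payments[:-1]
        g_con = np.diff(consumption) / consumption[:-1]
        return float(np.corrcoef(g_pay, g_con)[0, 1])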

Data Governance

A major concern for central bankers is data management: just over half of respondents to the Big Data in Central Banks survey said their banks have no clear data governance.

While survey respondents acknowledged that a significant data governance gap exists, they also noted that in most cases the issue is being addressed.

“We are working on designing the necessary policies to implement institutional data governance, as well as on the proposal to create an organizational structure that supports this process,” said a respondent from the Americas.

When moving from macro-level to micro-level data and statistics, the components of governance must be revisited to make sure they apply to granular data. This means finding flexible methods that clearly define the roles and responsibilities of each actor, and organizing access profiles, controls and audit trails at each point of the production process.
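In code terms, the plumbing described above might look something like the following minimal sketch, in which every access to a granular dataset is checked against a role’s profile and logged to an audit trail. The roles and dataset names are invented for illustration, not any bank’s actual setup.

    import datetime

    # Hypothetical role-based access profiles for granular datasets.
    ACCESS_PROFILES = {
        "statistician": {"credit_register", "hicp_micro"},
        "supervisor": {"credit_register"},
    }

    AUDIT_TRAIL: list[dict] = []

    def access_dataset(user: str, role: str, dataset: str) -> bool:
        """Grant or deny access and record the decision in the audit trail."""
        granted = dataset in ACCESS_PROFILES.get(role, set())
        AUDIT_TRAIL.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "role": role,
            "dataset": dataset,
            "granted": granted,
        })
        return granted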

Data governance is an important challenge for central banks. Many acknowledge that data and statistics are strategic assets for the institution and are initiating organizational changes to create chief data officer functions as part of their effort to streamline and enhance data governance across the organization.

A little less than half of survey respondents reported that their central bank had data governance guidelines in place. These respondents came from ten European central banks, four central banks in the Americas, three in Asia, three in the Middle East and one in Africa, with the majority located in developing and emerging-market economies.

A common theme in respondents’ comments was the departmental assignment of data management. A European central banker said: “The statistics department is responsible for the data collection [for] both statistical and supervisory purposes, but the increasing need for micro data requires us to rethink our current processes and create a strategy in data management.”

A respondent from an industrial economy, by contrast, described data governance as the responsibility of the statistics department in co-ordination with IT: “Both departments should aim to promote the organization of information architectures, the definitions of concepts, and the creation of metadata and catalogues, dictionaries and repositories of information. Each department of the bank has a data steward responsible for the content and management of the data it produces. There is also a master data steward, who co-ordinates the activity and oversees general guidelines.”
