Chief data officers, a relatively recent breed of executives, are often associated with highly regulated and data-heavy industries like insurance, government, health care and financial services. But chief data officers are beginning to make a play outside of those industries as well, most recently in media.
This week, Time Inc. announced its appointment of J.T. Kostman as its first-ever chief data officer (CDO). And last week, TechTarget, SearchCIO.com’s parent company, introduced Charles Alvarez as its first CDO. Neither has served in a CDO position before (although Kostman did serve as the chief data scientist at Keurig), but both have an extensive history with data, analytics and their supporting systems.
Alvarez’s experience comes from the financial services sector, where he’s worked for the likes of Credit Suisse, Bear Stearns and JP Morgan. SearchCIO had a chance to catch up with Alvarez to discuss his new role and why the complexity of IT systems is casting a shadow on the integrity of corporate data. This conversation was edited for brevity.
Why does a media company need a CDO?
Charles Alvarez: We’re an information company first and foremost. If I’m going to go out and sell to vendors, to purchasers and to people who buy technology or to people who may use this information to manage a hedge fund, for example, I have fundamental principles that I have to keep on top of: I have to be able to maintain the reputation of my firm, which rests on the quality of its data.
And I’ve got to be able to grow and increase sales where I can. I’ve got to be dynamic and agile so I generate new products and generate new services. And I won’t be able to do that if I’m not capable of understanding the information that I have or managing the information that I have.
Where has responsibility for data traditionally lived in organizations?
Alvarez: As it exists, from a legacy standpoint, for 99.99% of the world today, the data has lived in the technology organization. The technologists have to care about it because it’s inside their apps and their databases. If they screw it up, there are going to be consequences. So, from a historical standpoint, data has been managed by IT and it lives in their databases.
But what if I’m reengineering how marketing ops manages surveys? [Marketing ops] wants to create a vocabulary for its teams. Now when I want to create a new survey, I don’t have to start from scratch. I don’t have to wonder if this matches the question I asked two years ago because history is important to me, and I can say I have historical sequencing and semantic integrity with time and it’s valid. That’s data in the large.
Data in the small, the stuff that lives in programs, is what IT is responsible for. With data in the large, you have to be concerned about the integrity of the data and about the processes that create it.
Why is a ‘data in the small’ perspective too limiting for organizations today?
Alvarez: IT is responsible for the quality of data under its purview. So IT has to know that the business reports this particular item of data as x.y.whatever. But the IT function is not forward-looking, and that function doesn’t understand business semantics. If you stick a number in the database and the label on that number says P&L, if you’re in IT, you say that’s a valid number. To me, as a data person, I’d ask: Is that net? Is that gross? Are there sales credits? Is it compounded? Is it a cash number? What currency is it? Now all of a sudden you have a semantic component. Semantics are now important. In the past they weren’t important because we didn’t maintain large time series of data.
That’s the fundamental change that happens.
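Alvarez’s point about semantics can be made concrete with a short sketch: a bare number passes IT’s validity check, but figures in a long time series are only comparable when semantic attributes (currency, net vs. gross, accounting basis) travel with them. This is an illustrative Python sketch only; the class and field names are hypothetical, not anything Alvarez or TechTarget actually uses.

```python
from dataclasses import dataclass

# "Data in the small": a bare value in a database column labeled P&L.
# It is a valid number, but semantically ambiguous.
raw_pnl = 1_250_000

# "Data in the large": a hypothetical annotated figure that answers the
# questions a data officer would ask about that same number.
@dataclass(frozen=True)
class PnLFigure:
    amount: float
    currency: str              # e.g. "USD"
    basis: str                 # "net" or "gross"
    includes_sales_credits: bool
    accounting: str            # "cash" or "accrual"
    period: str                # e.g. "2015-Q2", needed for time series

pnl = PnLFigure(
    amount=1_250_000,
    currency="USD",
    basis="net",
    includes_sales_credits=False,
    accounting="accrual",
    period="2015-Q2",
)

# Two figures belong in the same time series only if their semantics match;
# comparing a gross EUR cash number to a net USD accrual number is invalid.
def comparable(a: PnLFigure, b: PnLFigure) -> bool:
    return (a.currency, a.basis, a.includes_sales_credits, a.accounting) == \
           (b.currency, b.basis, b.includes_sales_credits, b.accounting)
```

Here the raw value and the annotated figure carry the same number; only the annotated one supports the “historical sequencing and semantic integrity” Alvarez describes.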