Good morning, good afternoon, and good evening, everyone. My name is Jesse Tipton, field marketing manager at Quest, and I'm your host for today's session. Thank you all for joining us today for our webcast, Speeding Data Product Delivery from Model to Marketplace.
A couple of things to note before we begin. First, please type any questions you have in the Q&A field, and we will answer them at the end of the presentation. Second, today's session is being recorded, and it will be available to view by the end of the week on erwin.com.
You will also receive a follow-up email with a link to the on-demand recording. With that, let's get started. Our presenters today are Susan Lane, data thought leader, and Yetkin Ozkucur, professional services director here at Quest. Welcome, Susan.
Thank you. Hello, everybody. Great to be with you today talking on this really modern fun topic of data product delivery from model to marketplace using Erwin.
We recognize, along with many of our clients and our field teams, that Gartner is talking a lot about data products today. I was on a TDWI webcast a couple of weeks ago, and the number one priority for catalogs and marketplaces today was to be able to easily find and use data, while the number one issue is end user adoption. I really think this new method of building products using data helps on both of those fronts.
It's helping from a standpoint of structuring the data for business value through a product, and it's also helping in a way of being able to shop and share those products. And that's what we're here to talk to you about today. So let's start with what is a data product.
From our perspective, a product is something that really facilitates that end user need or requirement through the use of data. And we see data modelers, data scientists, data analysts, end user data consumers, and decision makers, everybody in that data and analytics (D&A) intelligence environment, as the primary stakeholders requesting the products, building those products, and then shopping and sharing them in the marketplace.
There are a lot of stats out there right now showing that this is really a viable way to go and something that's going to be around for quite some time. There are three key attributes of a data product, the first being it must be accessible. It must be available and easy for end users to grab and use.
It must be well-curated so that they understand what that product does and what potential insights they're going to be able to gain from that product. It must have some clear business value and a way to measure that incremental business value around the product itself. And then somebody is owning it, observing it, ensuring that it stays on track with the original intention of what that product was built for.
A really simple example that most of you can relate to is a banking problem. If you need data to back up a decision, where is a great place to build a new physical branch location? Should that branch have an ATM? Should it be a full service branch? How should it be staffed?
If that's the business problem, some of the components of a data product might be the data sets: customer addresses, loan histories, external data sets. Maybe I don't have all the data that I need, and I need to reuse previously purchased data or go purchase more data to support what I'm trying to do. And maybe there's already an AI model out there that I can use for customer segmentation, along with any logical models or reports that have already been built around those customer addresses, loan histories, et cetera.
Some potential insights you could learn from this product are how to adjust staffing accordingly, how to improve the customer experience, and how to align purchases for that customer segment. So this is just a real life example of how a product is built, how it's used, and the potential insights from a product. Within Erwin, and in speaking with Gartner-- we just had a meeting with Gartner yesterday-- they see a lot of the same issues out there when it comes to data governance, data intelligence, and now data products, where these things are stood up without any real clear intention and use cases behind them.
So for us at Erwin, it's really important to stay use case-driven, and we're a model first company. If you're really looking to put some structure behind your products, we suggest that you start to iteratively model out those products. Instead of building that huge enterprise conceptual, logical, or physical model, you can start building out models that relate specifically to a product and pass them to the catalog-- there is an Erwin DI catalog.
A lot of people don't know that. But the DI catalog supports all that curation around the inventory of your systems and the data lineage, and it also produces code. So if you need new code that joins data together to create that product, the catalog can produce that code. Then it's handed back to the business to curate that information, to govern it, and to associate it to any of the regulations or controls around the data that you need to adhere to.
We observe that data, so you can subscribe to a data product and understand when the quality might go bad, or when bias appears or the data tends to drift, and be alerted to changes to the data and to the business processes behind that product. Finally, scoring is a new method of understanding not just the quality of the data but also how well-liked and how well-curated the data is. And we classify that data into gold, silver, and bronze.
This is produced through an algorithm in Erwin. It's not something that you manually label as gold, silver, or bronze, and you'll see this in the demo. And finally, having that product easily accessible and shoppable inside the marketplace-- I don't want to spend a whole lot of time on slides, but we have a workflow process behind creating that iterative model, mapping it out inside the catalog, curating it, and then having it online in a marketplace to shop, share, compare, and see the scoring on that product.
Why leverage data modeling? From our perspective, it gives you a blueprint of what you're about to create, and it puts everybody on a common page as to what that product is, how the business views it, and what insights you want from it. The benefit of a catalog is that it brings you automation.
It's going to bring you that inventory of where the data resides, and the inventory of the business assets that you want to connect to that information. It's also going to bring you the data quality behind it-- data quality is built into our catalog. So it gives you a visualization of where my good data is, where my bad data is, where it's propagating, where it's sourced from, how close that data is to the original system, et cetera. There are a lot of really good visualizations that you're going to see in the demo today.
And then, of course, having that marketplace-- to hit on the key point of why some of these implementations fail from an end user adoption standpoint, it's because a catalog or data governance is not an easy concept to grasp. The marketplace makes it real and makes it a really easy concept to shop, share, and compare data.
Nobody wants to be responsible for sending bad data down the line or to other departments. We highly recommend that you start with governing and curating the data and ensuring that you have all the entitlements around it before shopping and sharing it in a marketplace, letting the marketplace be that safe zone, that safe place to go and get the data that you need for your product. The marketplace has four key concepts, the first being data sets.
We also store models, and I'll let Yetkin show you the models when he demos. That's going to give you transparency into what data sets are behind these models and what guardrails and controls you need to be aware of when using an AI model. We also use our data quality solution to monitor bias in the data behind the AI model.
Scoring I already talked about, but we also support third party data inside the marketplace, so you don't have to go to yet another solution to look for third party data and ensure that it's governed correctly. So, there's a lot of work being done from a glass box perspective. We know that AI is out there in so many different areas inside your organizations today, and people are really trying to get a handle on it and bring that transparency to the table for AI as well.
We're also really taking a look at monetizing the data, ensuring that you have that gold, silver, and bronze classification around the data. From a purchasing perspective, whether you're purchasing the data online or handling chargebacks through the different departments inside your organization, we allow you to do that within the system as well. And finally, setting up your third party data so that you can create the entitlements and the governance that's needed around it before redistributing it across the organization. With that, I'm going to hand it over to Yetkin so you can get a really good visualization of how that marketplace works.
All right, thank you very much, Sue, as I start to share my screen. I prepared three small demo scripts for you today. We will be looking at data products: how to browse, how to compare, how to shop, and how to get access, get your hands on the data. Then we will look at third party data sets and how our tool helps you manage them.
And finally, last but not least, the AI models: I'm going to show you what we do to govern and monitor the AI models inside our system. So let's begin with our first small use case, looking at data products. I click on this button, and I get a whole list of data products. I can explore them by category, and I can filter them by different parameters.
So for the sake of the story, let's assume I'm someone in the marketing department, and I'm looking for some data to understand our customer interactions on our website and provide a better quality of experience. So I'm looking at the different data products here. I can see there's Customer Care Plus.
There's Customer Data IQ, there's Conversion Craft, and there's CX Master. So there are a couple of data products which get my attention.
I can look at the text. I can see that they have some confidential information, some sensitive information, and I can look at this badge here, which shows me gold, silver, or bronze, and gives me an idea of the data value score, which Sue was explaining. If I mouse over one of them, you can see how this score is calculated.
I can see that this gold badge reflects good quality, good ratings, and good curation. That's why it has an overall 90% score, which translates into a gold badge. So to continue our story, the next thing I want to do is understand more about these couple of data sets-- I'm sorry, data products-- I'm seeing here.
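The badge calculation described above, combining quality, ratings, and curation into an overall score, can be sketched roughly like this. Note this is a hypothetical illustration only: the weights, thresholds, and function names are invented for the example, not erwin's actual scoring algorithm.

```python
# Hypothetical sketch of a weighted data value score and badge mapping.
# Weights and thresholds below are assumptions, not the product's real ones.

def data_value_score(quality: float, ratings: float, curation: float,
                     weights=(0.4, 0.3, 0.3)) -> float:
    """Combine 0-100 component scores into one weighted overall score."""
    components = (quality, ratings, curation)
    return sum(w * c for w, c in zip(weights, components))

def badge(score: float) -> str:
    """Map an overall score to a gold/silver/bronze badge."""
    if score >= 85:
        return "gold"
    if score >= 70:
        return "silver"
    return "bronze"

# A product with good quality, good ratings, and good curation
# lands around 90 overall, which translates into a gold badge.
score = data_value_score(quality=92, ratings=88, curation=90)
print(round(score, 1), badge(score))
```

The point of the sketch is simply that the badge is derived, not hand-assigned: change any component and the overall score and badge follow automatically.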
So what I will do is I'll start comparing them. So I'll start picking the data products I want to add to my shopping cart or compare screen. And I have these three products here.
And when I hit the Compare button, it provides a side by side comparison. It's one of those views end users will be familiar with from many shopping sites, like Amazon. You compare the artifacts side by side. It's the exact same experience, right?
So I can see which domain they belong to. I can see the definitions. I can see the overall ratings. And if I keep scrolling, I can see who are my data product owners, who are my stewards.
And at the bottom, I have some additional insights. Like, for instance, I can understand the cost. I can understand what's the expected insight from this data product, et cetera. So this side by side comparison is really handy to quickly find what you are looking for, what's the fit for purpose type of data sets or data products.
So let's assume after looking closer, I'm particularly interested in this data product, right? This Customer Care Plus. So what I will do as next is I'll click on it, and I'll drill into some details.
So now, I'm looking at the data product called Customer Care Plus. I can look at the description. It says customer interaction, inquiries, and service requests. It sounds very nice.
And as Sue also mentioned, a data product might consist of various artifacts: data sets, reports, database tables. There are some definitions, business terms.
There are some contracts and SLAs-- if I want access, what SLAs do I need to be aware of? There might be an AI model behind the scenes generating some output, et cetera.
The best way to view these different artifacts is usually the Mind Map. It provides you a 360 view of the data product you are looking at. So I'm looking at this Customer Care Plus product.
On one side, I have all the business context. I can see the policies applying to it, including some compliance policies.
On the other side, I can see the tables and columns participating in this product. To explain it more easily, I'll just expand them all. You can see in the middle we have our data product. On the left hand side, I have all the technical assets.
These are the tables, files, and reports-- whatever physical artifacts we scanned from the source systems. And on the right hand side, you have all the business metadata: the definitions, business rules, business policies, contracts, SLAs, data sharing agreements, et cetera. All this information is available here, color coded, and you can see what the different color codes mean in this small legend.
So again, this view gives me a very good overview at a high level of what happens if I request access to this data product: what are some of the policies, what are the tables, et cetera.
But let's assume if I want to drill in further. Let's assume I'm really interested in where this data is coming from, how is it calculated. So I'd like to see the lineage. I'd like to drill into more details before I get my hands on this product.
So what I will do is go to this table here, which shows me an overview of the overall data quality. And it has a red lock icon here, which indicates it has some PII, some sensitive information, in it. I'll click on this table to look closer at the table definition.
And again, I can see who the technical owners of this table are, who my stewards are, et cetera. But in this demonstration, I will quickly go to the data lineage. Now, it provides me an overview of what the data lineage looks like for this particular table I picked. I can see the data is coming from an SAP source system and some third party flat files.
I can see how this information has been modeled. Of course, we have Erwin. At any given time, you can see the associated Erwin models to that.
And as I go through these hops, I can see that it goes through the staging area to the warehouse, and downstream it goes into some reports in my analytics environment. All this information you see on the screen is built automatically with our connectors: we built the lineage automatically using our connectors. What I can do here is drill into any level of detail.
I can drill into, let's say, column level information. I can blend in some additional information. I can say, I'd like to see the sensitive information-- it was showing me those lock icons-- so now I can see which fields are flagged as PII.
Or I can blend in the data quality score. I can see, at a column level, how the data quality is in the source system and in the target system. I can blend in the logical names coming from my logical model to make the diagram more readable. So I have a lot of different options on this view, which I'm not going to go into in much detail. But you get the idea.
So I can drill into any level of detail of the product I'm looking at, and I can see everything, like, how is it being calculated. I can even drill into the SQL code, the SQL stored procedures, or the ETL mappings, how this data is being transformed-- the quality, the definitions, the sensitive information. I can see all of them in this view. So let's go back now. Let's assume this lineage view and this mind map view gives me the confidence I'm looking for.
I'm like, OK, this is the data product I need. This looks great. I looked at all the information, and it's the gold standard. So what do I do next? The next thing I want to do is get my hands on the data, right?
So what I will do here is go to the tasks. And I have different options here: I can send it back to Data Preparation, or I can request a new data set. And by the way, for all these tasks I'm showing you on this screen, there is a workflow behind them.
And we have a model first approach. Imagine I request a new data set: it goes all the way back to the modeling.
Our architects and modelers look at the request. They model accordingly, and then that model gets pushed into the data intelligence catalog. We can create the mappings and the pipeline on the fly, automatically. We can generate the ETL and the stored procedures if needed, create that pipeline, push the data from source to target, and make it available to the end user. So this marketplace is not only here to shop for the end products; there are workflows and processes behind the scenes all the way from the modeling to the mapping and the lineage and the automatic creation of the pipeline.
We also provision the data. We have data prep tools like Toad Data Point-- I'm sure you've heard the name Toad. It's a really handy, slick tool. If you want to quickly create a view for a request, you can do it.
So anyway, let's continue here. When I open the request access screen-- it's not going to fit everything in-- I can put in a reason for access.
I can say, hey, my name is Yetkin. I'm working on this project. Can I please get access to this data? It will go to the assigned users.
And by the way, I have to agree to the terms and conditions-- the data contracts I was showing you on the previous screen. I have to attest to those contracts to be able to request this data. Once this request is approved-- we're not going to show it live here, but we can show it afterward-- it will automatically grant me permission to the assets. That might mean access to a Power BI report, or a file in an Azure blob directory, or a table. It will be decided based on the nature of the request, on what I'm trying to do.
This workflow will take care of assigning me the permissions, and it will also deliver the data to me. Maybe it's a link to the Power BI report, maybe it's a link to a file, or maybe it's just an attachment, a spreadsheet. Whatever is needed for this request, the data will be delivered back to me, and I can get my hands on it and start working on it.
So, this is the end of our first script. To recap, we looked at data products: how they are constructed, how you can compare them side by side, and how you can drill into any level of detail to get comfortable deciding this is what you are looking for. And we don't stop there.
We also grant you the permissions and provide you the data. So out of this product, you can do the data provisioning and get access to the data all in one place. I will now move to our second script, so I'll go back to our main screen. This time, I will be looking at data sets.
Data sets are in general very similar to data products, but the scope is a little smaller. In this case, you are just looking at a data set, which might be a table or a file-- not a big combination of different things. It's quite handy, especially if you are looking at third party data. You can look at what third party data you have available in your company.
And again, you can quickly compare them side by side. In the case of third party data, of course, what's important is the cost, right? You might have equivalent or very similar data sets, but the cost might be different. If you are trying to access live data, it might be more expensive than accessing week-old data.
And maybe for your research you only need week-old data, so you can actually save money by looking at slightly outdated data. Or if you must look at the live data, fine, you can request access for that too. This third party data management, as simple as it looks, actually provides a lot of ROI once you start managing and governing the third party data. If you think about acquiring third party data, you can make sure you rationalize your decisions.
You make sure you pay only for what you need, because a lot of times we see our customers buy the same data over and over from the same vendor for different business units, and they don't even know about each other. Putting the data here makes it more visible. It also applies to the data you share with external parties, your downstream customers: you can ensure the quality and the consistency of those data sets here.
So again, this second script is relatively straightforward. We are also working on integration with other marketplaces, public marketplace tools, so that you can exchange information. You acquire something from a public marketplace, and you can put it here so that it can be used internally in your organization.
And my last use case is about AI models. So I will go again one more time to our entry screen, to our marketplace. And I will describe what we are doing around the AI models.
So I have nine AI models described in my system. So I'll pick a very obvious one. It's part of the risk domain.
I have a fraud detection model, so I'll quickly go there. The first thing you see is a nice, very transparent description of what this AI model does.
Who owns this information, and who owns this model? What is it intended to do? And again, there are all the policies and the data sets-- I'll come back to those later.
I can get some additional information about the model. I can even go to the AI model itself-- if I want to get my hands on the Jupyter notebooks and play with it to see what the model does, I can do that. I can see that it's in production, when it was last trained, other relevant information, and the version number being deployed.
So in a nutshell, what you are seeing on this screen is: what is this AI model, what is it supposed to do, who owns it, and is it in production? Is it being used, or is it still being developed? Let's go to the next page. Again, I'll go to our most popular view, which is the mind map.
If I click on it, again, it gives me a 360 view, of our AI model in this case. Like I did in the previous screen, I'll just blow it up and bring all the information in. And what you see here is that I have business policies, because this model touches PII information.
First of all, we are subject to CCPA, so it looks like it operates somewhere in California, right? And then I have all these regulations applying to it. As you have been seeing, in the European Union, in the US, in China, in different areas of the world, there are all these AI regulations being published by governments.
So you have this information, and you know what regulations apply to this particular model. That's number two: I know what I need to fulfill, what I need to comply with.
Number three, I'm looking at the data sets used to train this model. And I realize some of them are third party, and some of them are actually synthetic data. I'm like, OK, that's interesting.
So I'm using synthetic-- not real life, but synthetically produced-- data to train this model, which is fine. It's a very common thing nowadays, using synthetic data to train models. And then, last but not least, I'm looking at a high risk classification. If you look closer at the regulations, especially the European Union regulations, models are classified as high risk, medium risk, and low risk.
You can, of course, read the definitions, but in a nutshell, if your model touches PII or sensitive information, it's usually categorized as a high risk system. So you can see that this one is classified as a high risk system because it touches PII.
Then you know what rules apply and what you need to watch out for, like bias. And this red icon, as mentioned earlier, tells you that this model touches PII. It might be intentional, it might not be.
But by all means, you don't want bias, especially in a case like fraud detection on personal information. So the third thing we do is look at all the regulations, especially the privacy regulations, and make sure the models do not, intentionally or unintentionally, violate them. Now let's look at the data itself, because we say AI models are only as good as their data.
And I can already see it's green: 84% data quality. Great. I'm using this table to feed the AI model, and I have reasonable quality.
But nevertheless, I'd like to drill down a little further to understand what's going on. So one more time, we are drilling into some technical details. I can see that I'm looking at a table here, the credit card transactions table, because the fraud analysis is based on this table.
And I can see the data quality, but I'd like to see the quality at the column level. Now I can see, for every column, that some columns actually have lower quality than others. And I have drift alerts.
So the fourth thing I want to show you is drift. If I click on this particular column, it takes me to our data quality environment, DQLabs, where I can drill into more details of the profiling and the quality. But what I want to focus on today is drift, because drift is so important for AI models.
You want to be able to catch drift as it happens, because if you don't catch it and take measures, it can impact the output of your model. So I'm looking at the drift on the transaction category column for this table. I can see I have defined several drift rules, and I set some alerts.
I said, OK, if the drift is higher than this threshold, just notify me. And by clicking on it, I can see exactly what date the drift happened, how it happened, and how it is trending.
And it's live information. This tool continuously-- two or three times a day-- looks for incremental changes in the data sets, looks for drift. And if it goes over a threshold, it notifies people.
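The drift-rule idea described above, comparing a column's current distribution against a baseline profile and alerting past a threshold, can be sketched in a few lines. This is an illustrative toy, not the tool's implementation: the metric (total variation distance), the threshold, and the sample data are all assumptions.

```python
# Illustrative sketch of categorical drift detection with a threshold alert.
# Metric, threshold, and data are made up for the example.
from collections import Counter

def distribution(values):
    """Normalize category counts into proportions."""
    counts = Counter(values)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def total_variation(baseline, current):
    """Total variation distance between two categorical distributions (0..1)."""
    keys = set(baseline) | set(current)
    return 0.5 * sum(abs(baseline.get(k, 0) - current.get(k, 0)) for k in keys)

# Baseline profile of a transaction_category column vs. today's batch.
baseline = distribution(["retail"] * 70 + ["travel"] * 20 + ["cash"] * 10)
today = distribution(["retail"] * 40 + ["travel"] * 20 + ["cash"] * 40)

THRESHOLD = 0.2  # alert if the distribution shifts by more than 20%
drift = total_variation(baseline, today)
if drift > THRESHOLD:
    print(f"Drift alert: transaction_category shifted by {drift:.0%}")
```

A monitoring job run a few times a day, as described in the demo, would recompute `drift` against the stored baseline on each incremental batch and notify owners whenever the threshold is crossed.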
And the data science teams we work with love this capability. They constantly monitor the input and output of their models, watching out for drift. So that's the end of my last part.
Again, what we did is we looked at the AI models. We looked at how Erwin Data Intelligence can help you govern them by defining them in the system, making them transparent, by articulating the owners, and then by associating them to the policies and the regulations that help you stay in compliance. It helps you by identifying any sensitive information, identifying all the data quality as well as the data drift.
It provides you a continuous monitoring around the AI models. So what we are also doing is, our customers are doing it, is as they develop their AI models, they take the data sets, and they run it through our system. Like, you have been seeing all this gold badge, silver badge, et cetera, and they pretty much certify the data fit for use for AI/ML type of implementations, right?
So it's a great process. It's model- and data-focused development rather than just focusing on the Python code. So that's it. Sue and Jesse, that's the end of my demonstration. I'll hand it back to you for Q&A.
Thanks, Yetkin. Yep, there are a few questions in here, and I tried to respond to one, and I'm not sure if it went through or not. So let me start with there was one question about what is the difference between the Erwin marketplace and Azure or Databricks marketplace.
And the number one difference is that the Erwin data marketplace, or a catalog data marketplace, will go out to a hundred different sources-- not just the inputs and outputs of Databricks or Azure-- and help you find those data sets and shop and share them inside the marketplace. It also supports both internal and external data. So it's not just an exchange of external data sets; it's both internal and external, and synthetic data sets as well, if you'd like.
And I would say the third main difference is that it's been through this governance process. It's been modeled. It's been governed.
It's been curated. And you have lineage, and you're observing the data behind that marketplace. So those are the main differentiators.
So you want me to take the next question, maybe?
Yeah, go ahead.
I'm sorry, I was reading one question-- maybe I was preparing the answer in my mind. Yeah, there's a question: are the data product scores in the marketplace influenced by responses from data consumers, or are they purely algorithmic, based on data quality?
No. Our data scoring algorithm is based on infonomics concepts. It looks at three to four parameters today, but we are already improving it to look at seven or eight different parameters. It looks at where the data is coming from.
Is it coming from the right source? Is it good quality? Is it popular, based on all the ratings? Is it fully curated? Is it complete?
So we look at all these different parameters, and our customers have the flexibility to tweak this model, to change the weight of every parameter. In your case, if data quality is more important than, let's say, the popularity of the data set, then you can increase its weight relative to the hit count, the popularity of the data set. And to the other part of your question, about how end users influence the rating: they will not influence everything.
Like, if it is coming from the good source, it's coming from the good source. The end users cannot change it. But the end users can rate.
We have the ratings, if you remember, one to five stars. Those average ratings get into the score: the more five star ratings, the higher the score goes. And also the hit count, the popularity of the data set-- if it's very popular and everybody is using it, that also improves the score, because it makes the data set more critical.
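The answer above, a fixed set of parameters whose weights customers can tweak, amounts to a configurable weighted average. The sketch below is hypothetical: the parameter names, default weights, and sample values are invented for illustration, not the actual scoring model.

```python
# Hypothetical illustration of customer-tunable score weights, e.g. boosting
# data quality's weight relative to popularity. All names/values are made up.

DEFAULT_WEIGHTS = {"source": 0.25, "quality": 0.25, "popularity": 0.25, "curation": 0.25}

def score(params: dict, weights: dict) -> float:
    """Weighted average of 0-100 parameter values, normalized by total weight."""
    total = sum(weights.values())
    return sum(weights[k] * params[k] for k in weights) / total

# A well-sourced, well-curated, high-quality data set that few people use yet.
asset = {"source": 90, "quality": 95, "popularity": 40, "curation": 80}

# A customer who values quality over popularity tweaks the weights.
custom = dict(DEFAULT_WEIGHTS, quality=0.45, popularity=0.05)
print(score(asset, DEFAULT_WEIGHTS), score(asset, custom))
```

With the default weights the low hit count drags the score down; re-weighting toward quality lifts the same asset's score, which is the kind of per-customer tuning the answer describes.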
There's another question: do we need to recreate the data in the marketplace, or can we send the data from the glossary? So yes, definitely, we connect with the business glossary assets.
Throughout the system, we use AI to make that connection. So you're not recreating any of those business assets or curation. Jesse, are we missing any questions?
There are more new questions coming in, Sue.
Yes, there are a few.
So I'll take the drift question, Jesse. How does it identify the drift if the data store doesn't store the history? So actually, we store the history.
Our data quality tool, like I said, continuously keeps profiling and monitoring. And you can define the drift rules: what are you looking for?
Are you looking for duplicates or unique values? There are all these different, sophisticated rules you can define, or the tool will help you define them and make suggestions. So basically, the drift and the historical values of the drift are stored in the data quality component of our tool. And that's how it finds out if there is drift happening or not.
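The drift mechanism described above, comparing a freshly profiled metric against stored history, can be sketched simply. This is an assumption-laden illustration (the function, the relative-deviation rule, and the 10% threshold are all hypothetical), not the tool's actual implementation:

```python
# Hypothetical sketch of drift detection against stored profiling history.
# The tool described above keeps historical profile snapshots and lets you
# define drift rules; here a metric is flagged when it deviates from its
# historical mean by more than a relative threshold. Names are illustrative.

def detect_drift(history: list, current: float, threshold: float = 0.10) -> bool:
    """Flag drift if `current` deviates from the historical mean
    by more than `threshold` (relative deviation)."""
    baseline = sum(history) / len(history)
    return abs(current - baseline) / baseline > threshold

# Example rule: uniqueness ratio of a key column across past profiling runs.
uniqueness_history = [0.99, 0.98, 0.99, 0.97]
print(detect_drift(uniqueness_history, current=0.82))  # large drop, drift flagged
```

The key point matching the answer above: because every profiling run is stored, the baseline is computed from history rather than from the data store itself, so drift can be detected even when the source system keeps no history.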
We have another question about the personas who would interact directly with the marketplace. We definitely see the folks in the data and analytics area: the engineers and analysts who need to get their hands on good, solid data, and the data scientists behind the AI models who want to compare data sets and find good internal and external data sets to use behind those models. So primarily, it's those D&A folks or business intelligence folks who are trying to create value and insights with the data sets.
Thank you. I'll take the next question. So, the question is, do you have a connector for Data IQ? We do not today, but we do partner opportunistically with AI/ML providers.
We do it tactically, meaning on a per-customer basis. But of course, if a connector for Data IQ is required, we can build it. If you look at those tools, they provide some governance and some tooling specific to data scientists.
We are more on the data side, and they are more on the data science side, the programming and algorithm side. But a combination of both usually produces great results. I do remember one integration where we were pulling out the accuracy and precision, some values they were tracking in the model.
And then we just transferred that back to our product. And then we can provide you histograms and trending dashboards saying, OK, your accuracy and precision are going like this over time, falling below the threshold, and so on.
And it's fun to combine the data drift monitoring graphs with the graphs that measure the accuracy, precision, or other parameters of the model. So you can see when drift goes down, two weeks later the accuracy goes down, et cetera. So it's fun.
It's good work. We haven't productionized an integration yet, but that's definitely the direction we are going right now. And I'll take another quick question, Sue.
Can you use Erwin to document files in a data lake? Yes, absolutely. We have connectors for data lakes, and we support pretty much all the file types in those data lakes. So we can scan those files, bring them in, and provide you the metadata to document them.
There's another one about the six dimensions of data quality: uniqueness, consistency, accuracy, and so on. Yes, yes, yes. We do measure all six of those, and you can create your own as well. Did we hit them all?
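As a small illustration of two of the dimensions mentioned, completeness and uniqueness can each be computed as a simple ratio over a column. This is a hypothetical sketch, not the tool's implementation; the real product measures all six dimensions and supports custom ones:

```python
# Hypothetical sketch of two data-quality dimensions mentioned above,
# computed over a single column. Function names are illustrative.

def completeness(values) -> float:
    """Share of non-null values in the column."""
    return sum(v is not None for v in values) / len(values)

def uniqueness(values) -> float:
    """Share of distinct values among the non-null ones."""
    non_null = [v for v in values if v is not None]
    return len(set(non_null)) / len(non_null)

column = ["a", "b", "b", None, "c"]
print(completeness(column))  # 4 of 5 values present -> 0.8
print(uniqueness(column))    # 3 distinct of 4 non-null -> 0.75
```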
Yeah, I think so. The other comment is from Jesse. Definitely a big mention of Doug Laney and the infonomics here.
Thank you, Jesse, for bringing it up. Definitely-- I mean, you see where we are going, right? I mean, all these data scores and calculating these things, they will eventually turn into monetization.
Like, those scores will turn into money values, and then you can start associating money values with the data sets, et cetera. So those are definitely the same principles and the same direction we are following. I think, Jesse, we are good, and I think we are at the end of our time, right?
Yes. Thank you for your presentation today, Susan and Yetkin. Everybody, please scan the QR code on the screen if you'd like more information or to get in touch with our team. Just a reminder that we will send you a link to the recording within the next few days. Thanks for joining us, everyone, and have a great rest of the week.
Thank you.
Thank you.