BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//hacksw/handcal//NONSGML v1.0//EN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20260429T054720Z
DESCRIPTION:Click for Latest Location Information: http://dgiq-edw2026.data
 versity.net/sessionPop.cfm?confid=165&proposalid=16653\nData teams are at v
 ery different points in their AI journey: some are already building 
 agentic workflows and running into hard questions about governance, central
 ization, and organizational fit; others are still waiting for approval. The
  challenges look different depending on where each organization stands, but
  the sol
 ution is the same: a governed, centralized platform powered by agentic AI p
 rocesses, validating data at scale with documented, transparent results lea
 ding to data trust.\n\nFor organizations running complex reconciliation pro
 jects, managing ongoing data quality monitoring, or validating data across 
 a migration, the stakes of getting this wrong are high. Ungoverned automati
 on creates the illusion of coverage without the auditability, consistency, 
 or human oversight that regulated industries and data governance programs a
 ctually require.\n\nIn this session, Jonathan Agee and Matt Agee walk throu
 gh how the Validatar platform puts governed agentic AI into practice: usin
 g metadata-driven test generation to automatically profile and scale 
 validation across thousands of tables, automated source-to-target reconcili
 ation to verify data integrity across systems, native integration with data
  catalogs to keep quality aligned with your governance framework, and conti
 nuous monitoring across development, QA, and production environments. Drawi
 ng on real-world implementations across insurance, healthcare, and financia
 l services, including one organization that validated 2,300 tables i
 n a single week and another that uncovered 12 previously undetected critica
 l defects across 20,000 columns, they'll share a practical frame
 work you can bring back and apply immediately.\nTakeaways:\n\n
 Why governance must be designed into agentic AI workflows from the start, 
 not added after something breaks\n
 How metadata-driven agentic automation scales testing and reconciliation wi
 thout sacrificing auditability\n
 What a governed monitoring framework looks like across development, QA, and
  production\n
 How to make the internal case to data governance and compliance stakeholde
 rs for governed AI automation of data quality validation\n
 Real metrics from regulated-industry organizations applying this approach a
 t scale\n\n
DTSTART:20260505T144500
SUMMARY:Validation at Scale: How Governed Agentic AI Unlocks Data Trust Thr
 ough Testing, Monitoring, and Reconciliation
DTEND:20260505T151459
LOCATION:See Description
END:VEVENT
END:VCALENDAR