
MediaNama invites you to a roundtable discussion on AI and Fintech in Bangalore on April 24. The event will explore key regulatory concerns, including algorithmic explainability, safeguards for accuracy and human agency, bias in credit algorithms, scalability of harm through personalisation, and governance mechanisms for AI accountability, particularly in third-party and cross-sector deployments.

Join us for an insightful discussion with the fintech sector’s most important stakeholders, alongside a variety of policy experts, researchers and entrepreneurs.
Register Now

Indian regulatory bodies have been gearing up to govern the widespread adoption of AI in the fintech sector, identifying potential risks and assigning liability. In December last year, the Reserve Bank of India (RBI) formed a committee to create a Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI). The committee’s focus was to develop a responsible AI framework for the financial sector, aiming to ensure fairness, accountability, and data privacy while addressing risks like algorithmic bias.
While the central bank noted that Artificial Intelligence (AI) had the potential to automate complex processes and improve efficiency, it came with risks like algorithmic bias and data privacy concerns, saying that “to harness the benefits, it is critical to address the attendant risks early in the adoption cycle”. Accordingly, the committee was set up to “recommend a robust, comprehensive, and adaptable AI framework for the financial sector.”

The terms of reference for the committee are:

- To assess the current level of adoption of AI in financial services, globally and in India.
- To review regulatory and supervisory approaches on AI, with a focus on the financial sector globally.
- To identify potential risks associated with AI, if any, and recommend an evaluation, mitigation and monitoring framework, along with consequent compliance requirements for financial institutions, including banks, NBFCs, FinTechs, PSOs, etc.
- To recommend a framework, including governance aspects, for the responsible, ethical adoption of AI models/applications in the Indian financial sector.
Additionally, the Securities and Exchange Board of India (SEBI) has mandated that regulated entities remain fully accountable for the AI systems they deploy, including compliance with laws and data privacy requirements. Keeping these developments in mind, we intend to explore the following issues:

- Algorithmic Explainability: Defining thresholds for meaningful customer-facing explanations, and understanding the challenges of producing meaningful model explanations. What if the explanation is considered illegal? What is the threshold for a good explanation?
- Identifying Safeguards: Ensuring consistency and accuracy. Preserving human agency in algorithmic decision making. Addressing systemic risks from flawed AI at population scale.
- Bias in Credit Algorithms: Challenges with establishing bias and preventing discrimination against credit invisibles and algorithmic blind spots. Regulating self-learning algorithms and addressing self-learned bias.
- Scalability of Harm – Personalisation vs. Privacy: Managing data risks in hyper-customised financial services, especially with account aggregators.
- Governance Mechanisms: Allocating accountability across AI system components. Regulatory accountability in third-party and cross-sector AI deployments. Certification of algorithms.
Date: April 24, 2025
Venue: To be Announced
Timing: To be Announced

WHY SHOULD YOU ATTEND?

Here are 5 reasons why, if you’re in technology/policy, you must consider attending MediaNama discussions:

1. Share your point of view and get instant feedback: At MediaNama discussions you get to share your point of view, instead of being talked to by a panel: over 60% of a MediaNama discussion is interactive and focused on debating issues with the participants in the room, an audience curated for relevance.
2. Meet others working on the same subject: We have dedicated networking time at MediaNama discussions, where you can meet and engage with others working on the same subject, and get a sense of what’s happening behind the scenes.
3. Hear a point you hadn’t considered: We curate our participants for a diversity of perspectives, with people working on the same issue but coming from different backgrounds: hear not just from policy wonks, but also technologists, founders, and people working with lawmakers, among others. There’s always another way of looking at something.
4. Outcome-focused: At the end of each session, one of the key questions we consider is a wishlist: what should change, why, and how?
5. Breadth and depth of understanding: At MediaNama we are focused on providing our readers and attendees with both the depth and breadth of understanding of technology policy. We work hard to determine the probing questions that lend themselves to a deeper understanding of issues, discuss principles, and bring out perspectives that you might not have considered. You can ask questions too. Sometimes the questions are more important than the answers.

Registration does not guarantee attendance.
If you fit our mandate for the intended audience, we’ll send you a confirmation before the event. Your contact information will never be shared with anyone outside of MediaNama.