Breakout 4: Muni Market Regulators and Artificial Intelligence Considerations

We will examine AI-related regulations and developments in the evolving muni market.


Transcription:

Leslie Norwood (00:11):

So if you're here for the panel on muni market regulators and artificial intelligence considerations, you're in the right place. Thank you very much for joining us today, and we're here to examine AI-related regulations and developments in the evolving muni market. As you all know, there have been some ancillary mentions of AI at this conference, notably its impact on the demand for electricity going forward. As all things AI are changing very quickly, its impact on practices in the muni market, and the regulations that govern those practices, are still rapidly developing. My name is Leslie Norwood. I'm Head of Municipal Securities at SIFMA, and the two experts joining me today on this topic need no introduction: Emily Brock, Federal Liaison for the GFOA, and Dave Sanchez, Director of the SEC's Office of Municipal Securities. I guess I'm the one who's going to have to kick this off.

(01:11):

So just recently SIFMA worked on a paper entitled Promoting Investor Success, Industry Innovation, and Efficiency with AI. So we've looked at this from the broker-dealer point of view, the regulated party point of view, in light of the attention that AI has gotten in financial services over the past few years. And we noted that federal and state policymakers and regulators are continuing to assess the impact of AI on financial services. And so we wanted to come up with our own points ahead of further regulation in this area to try to hopefully guide regulators in terms of the industry's viewpoints on this. And so we acknowledged the increased efficiencies from the application of AI, and we also acknowledged that these new technologies may present certain risks, but we do have well-established legal and regulatory governance frameworks in financial services firms, because of the high degree of regulation, to address these existing risks and the new risks regardless of what technology is used.

(02:33):

So our key theme here is that we call for policymakers and regulators to apply the existing risk-based rules and guidance for the deployment of AI and any other new technology in the markets, rather than engaging in any technology-specific rulemakings that will likely be outdated before they're finalized. And so our thinking here is that the use of AI is not new in financial services. The advancements in AI with regards to generative AI have really heightened the interest here. But again, we think that a technology-agnostic approach will encourage innovation within the industry without sacrificing safety in the financial markets. We feel strongly as regulated parties that there is no need to adopt a precise definition of AI at this time, because AI is an evolving technology, and technology really changes all the time. So as long as we're agnostic in this approach, it really doesn't need a definition. Policymakers and regulators should collaborate with regulated financial services firms to understand the uses of AI and its related benefits and risks. Only after that should additional regulatory action be considered, and only if the existing laws and regulations do not address the novel risks that are identified.

(04:11):

Any regulatory action should be flexible enough to continuously adapt to evolving technology, because prescriptive rules can lead to inconsistent regulations across jurisdictions and will also deter innovation that could benefit all market participants, including investors. Existing laws and regulations should recognize that the management of different financial services firms is best positioned to identify emerging risks and the impact they could pose on their businesses, and that firms should continue to retain this flexibility when determining how to address the use of AI and other emerging technologies. And finally, policymakers should assess how existing areas of law and regulation apply to the use of AI in the financial services industry and consider strategies for mitigating any potential risks, including in the areas of federal data privacy legislation and copyright ownership. And so that is the viewpoint from the regulated entity. With that, I turn to you, the policymaker at the SEC. Dave, do you have any thoughts from your chair in the federal government?

Dave Sanchez (05:33):

Sure, thanks Leslie. And I apologize, I have to give this disclaimer again, but the views I express today are my own and do not reflect the views of the Commission, other Commissioners, or other staff at the Commission. I think folks can say that along with me at this point. So I think this SIFMA white paper that Leslie referenced is actually a really good read for folks who are thinking about how to approach regulation in general. And I have a couple of reactions to it, but first, big picture: I think most of the time, when you think about the way regulation works, it really does wait for issues to arise. And there has already been consideration by the SEC of AI in a variety of contexts, not necessarily munis. And certainly, for example, some of the enforcement cases that have been brought were done under existing rules, right? Because when you think about fraud in particular, fraud is fraud, right? It doesn't matter if you're doing it on paper or using AI, the same principles apply.

(06:38):

So generally speaking, I think these notions of being technology neutral and all these things are actually generally the correct way to go. And I think historically regulators may not have always done that, and certainly not always at the SEC. You can think about how the SEC had to do some unwinding after Dodd-Frank of references to credit rating agencies, for example. And for something like technology, it could be even worse, right? Because you just lock in something that becomes outdated. And so I think there's been a real learning in general for regulators that you don't want to lock yourselves into that. I think there end up being some tensions, though, occasionally, and the tension is: does the existing regulatory framework actually work in this AI context? And I've seen that in different contexts, but it's the same sort of question. Of course, the regulators might not think it does while the market does think it does.

(07:39):

And when I think about the rules that have been built up specifically in the muni market, and specifically about things like trading, some of these go back to the seventies, you know what I mean? And a very prescriptive era of rule writing as well, particularly when you're talking about some of these older MSRB rules. So it really may be that the regulatory framework needs to change, because it was just built for a different era, and also an era where you had much more prescriptive regulation. And then there's always this tension between principle-based rules and prescriptive rules. And certainly when I was in private practice, we always wanted principle-based rules, but then principle-based rules make people really nervous, and then they start asking for more prescriptive stuff like, please give us more guidance. Please give us,

Leslie Norwood (08:28):

We want a checklist.

Dave Sanchez (08:30):

Yeah, exactly.

Leslie Norwood (08:30):

We want to know that we're good.

Dave Sanchez (08:32):

I want to know if I'm on the right side of the line or the wrong side of the line. People aren't comfortable with the ambiguity, which is also very understandable, but you immediately get that kind of tension when you have principle-based rules. So I told Leslie I thought, big picture, the SIFMA white paper, in terms of identifying how to approach regulation, to me made a lot of sense, particularly this emphasis on being technology neutral. But I could see there being issues with, one, does the existing regulatory framework really capture this stuff, yes or no? And then two, just that tension between principle-based and prescriptive rules.

Leslie Norwood (09:08):

I think what I'm taking away from this is that the industry and the SEC agree that they should be technology neutral. I can stop there, but we agree on something. So I think this is something positive moving forward.

Dave Sanchez (09:21):

I mean, again, I'm only speaking for myself, but I will underscore that point. But yeah, I think you can really box yourself in, and considering how hard it is to change rules and how hard it is to change regulation, I think it's something that people have to be really cautious of, particularly with rapidly evolving technology.

Leslie Norwood (09:40):

Thanks, Dave. Anything else that you want to point towards regarding the SEC's thinking on this? Or should we turn to Emily?

Dave Sanchez (09:47):

Let's turn to Emily and see what her folks are doing.

Emily Brock (09:51):

Yeah. Well, as an unregulated entity sitting beside a regulator, I feel like this is a little therapy. I can just kind of say whatever I want. Just kidding, I can't. But I did want to talk a little bit about AI in state and local governments and all political subdivisions in between, all issuers, I could say. So AI is starting to creep its way into a lot of different functions of local government. I think folks here are really interested in how it creeps into disclosure, and in particular the use of AI in reconciling information and giving information. But I do think it's important to note, before I even start on that, that AI is being utilized by communities across the country to enhance service delivery. Remember, they have jobs too, and so they have to make sure the toilets flush. They have to make sure the refuse is collected.

(10:47):

I mean, there are ways that AI has started to help communities across the country do what they need to do in order to be governments for their citizens or their communities. But when it comes to financial information in particular, we've seen in some communities across the country, very few communities across the country, how AI is being incorporated into budgeting practices and treasury practices. And in particular, GFOA has started to look at reconciling financial information and extracting information from your annual comprehensive financial report, or ACFR. And we have a project right now where we're working with Rutgers to try to figure out how we write the rules to extract information from the ACFRs that are produced and reconcile that information in a way that's helpful for the public, for investors, but also for stakeholders who use that information. Now, that's been a really fun and interesting effort done by a lot of young people, which incidentally is great, because I think there's a way to infuse folks who are learning inside of universities with what municipal finance is, but also how we might advance, and in particular, what questions are we asking of the financial information and how is that evolving as a topic?
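[Editor's note: to make the idea of "writing the rules to extract information" from an ACFR concrete, here is a minimal, hypothetical sketch of rule-based extraction. The field names, patterns, and sample text are invented for illustration; they are not the actual GFOA/Rutgers rules.]

```python
# Hypothetical sketch: rule-based extraction of dollar figures from ACFR-style
# text. Patterns and field names are illustrative assumptions only.
import re

acfr_excerpt = """
Total governmental fund balance at year end was $12,345,678.
Net position of governmental activities totaled $98,765,432.
"""

# Each "rule" pairs a target field with a pattern that locates it in the text.
extraction_rules = {
    "fund_balance": r"fund balance.*?\$([\d,]+)",
    "net_position": r"[Nn]et position.*?\$([\d,]+)",
}

def extract(text: str, rules: dict) -> dict:
    """Apply each rule and normalize matched dollar amounts to integers."""
    results = {}
    for field, pattern in rules.items():
        match = re.search(pattern, text)
        if match:
            results[field] = int(match.group(1).replace(",", ""))
    return results

print(extract(acfr_excerpt, extraction_rules))
# {'fund_balance': 12345678, 'net_position': 98765432}
```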

(12:20):

But speaking of that, again as an unregulated entity, it's really important to note that sovereignty is expressed in the 10th Amendment to the United States Constitution. It's interpreted further in the Tower Amendment. But nevertheless, a bill passed Congress called the Financial Data Transparency Act, and I blessedly turned in our GFOA letter by the deadline of Monday. So I felt a big, huge sigh of relief. But it wasn't just GFOA commenting on the Financial Data Transparency Act. What the Financial Data Transparency Act tries to do, or attempts to do from the legislative text, is it asks for inputters of financial information into repositories across the spectrum of a lot of different federal agencies, financial agencies, to collect that information in a structured data format. So you might want to ask yourself right now, is AI structured data? It's not, but could it be? Structured data frameworks are often static, and we definitely don't want to make investments that are going to shortly become outdated, whereas AI is a little bit more dynamic.
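[Editor's note: for readers unfamiliar with the term, here is a hypothetical contrast between unstructured disclosure text and a structured, machine-readable record of the same fact. The tag names are invented for illustration; the FDTA's actual taxonomies are still being worked out in rulemaking.]

```python
# Hypothetical contrast: unstructured prose vs. a structured record.
# Tag names and values below are illustrative assumptions only.
import json

# Unstructured: a human-readable sentence that software must parse and infer from.
unstructured = "Total revenues for fiscal year 2023 were $4.2 million."

# Structured: every value is explicitly tagged, typed, and unit-labeled,
# so downstream tools can consume it without guesswork.
structured = {
    "entity": "Example City",
    "period": "FY2023",
    "concept": "TotalRevenues",
    "value": 4_200_000,
    "unit": "USD",
}

print(json.dumps(structured, indent=2))
```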

(13:35):

There's certainly a role for AI in the conversation about how we make financial information searchable, machine-readable, and open source. Now, that said, thinking about the Financial Data Transparency Act, when I did drop off my letter, I was very happy that it's now in Dave's court. And I also know, and I want to note, that they are in active rulemaking, so he officially can't say anything. So now is really my time in court. So in the letter, we of course note, as I said, the beauty of the construct of the Constitution and the Tower Amendment, and we also note important parts of the Financial Data Transparency Act that essentially equate governmental entities to financial entities, which we are not. And I think that's an important distinction as regulators are thinking about where we are going with this structured data conversation. Second, we also want the SEC to make sure they don't impose any unfunded mandates.

(14:38):

Think about that in the form of structured data as well. If there is an investment, a hard dollar investment in purchasing software, they can't do that. However, if a community has to comprehensively change the way that they produce financial information, that's going to require consultation, that's going to require teaching your employees how to do it. It's going to cost the organization in order to comply with an unfunded mandate. Third, we ask the SEC to consider the challenges, the redundancies, and the unfunded mandates when you take something that exists in the corporate sector and apply it to the bespoke municipal sector. That's obviously a critical element. And we also tell the SEC that they have to stay true to the law to make sure that they scale the implementation of the FDTA. Consider, as issuers of municipal securities, when we pull together information, we're talking about states, we're talking about universities, we're talking about airports, we're talking about the mosquito abatement district of Southern California.

(15:45):

All of them have to fit within this regulation. And so the question is, how do you scale this? And we certainly want to be at the table with the SEC as they're considering that. Also, we make sure that the SEC remembers that in the law it is written that they need to consult market participants, not only issuers but also others who are deeply concerned with disclosures in our space, such as bond lawyers and, in addition to that, municipal advisors. And the list goes on and on. So making sure that the SEC understands that. Now, one commissioner, Hester Peirce, asked in the release of these proposed regulations: what role does AI play in the Financial Data Transparency Act and the evolution of it? What can you tell me, as a commissioner of the SEC, that I need to know about whether or not AI can effectively do what the FDTA is asking?

(16:48):

And right now, from GFOA's perspective, while we are on the cutting edge of scraping data off of ACFRs and creating pools of information, we don't think it's quite passed the test yet. It's not quite ready to be FDTA compliant, first of all, because it doesn't fit the definition of structured data. But second of all, there are still challenges with scraping data, and then, in addition, with attestations, that is, with materially effective ways of communicating to the public that this is your financial position. So you have to be really careful right now with AI in this space. But we also, at the same time, are concerned about investing in static technology. So we're in a love-hate relationship with AI right now, not as much as you two agree with each other. We certainly don't necessarily agree that AI is ready to perform that task, but we are exploring it heavily.

Leslie Norwood (17:50):

Well, I'm going to disagree with the SEC as well on this while I have the opportunity and he can't really talk about it because it's active rulemaking. But SIFMA too submitted a comment letter on Monday with regards to the FDTA, and as you point out, there are a lot of prescriptive data points, and there's the discussion about adoption of legal entity identifiers and potentially moving from CUSIP to FIGI in terms of a security identifier. The base thinking behind our letter is that it fails the cost-benefit analysis and that more communication and investigation with the industry before anything moves forward is critical. We're talking about some of the data points that run through all of the MSRB rules regarding reporting of securities with a CUSIP number. There would have to be many different rules changed should that go forward. And so there are challenges to these types of prescriptive rules. Can AI get to where it needs to be by the time the FDTA regulations are due to be put into place four years from now? I mean, I think with the change in technology and the speed at which things are moving, potentially AI could replace these types of data points and be able to effectively extract data. And so I think there are a lot of challenges that the SEC is going to need to think about as they analyze all the comment letters that they received on Monday.

Emily Brock (19:36):

I do like to talk about the SEC as if he's not here.

Dave Sanchez (19:40):

I was going to say,

Emily Brock (19:40):

We're just talking through him. He's not here.

Dave Sanchez (19:44):

I'm glad you guys say AI every so often so that we're clear that we're in the right panel. But actually, going to one other thing that Leslie said, which was kind of interesting to me: last month I was on a panel and folks were talking about AI and disclosure, and the one example that was brought up was really more about public employees' use of AI for drafting reports or something. And I had kind of offhandedly said, well, it's hard to think that that's really material, but if you're talking about other things like projections or budgets or something, that's a whole different story. And at the time I said that, I really didn't think anybody was doing that, but then there was reporting in The Bond Buyer just literally a week later about one city in Ohio that does that, and also various vendors providing this type of AI for budget development. And those get into actual real risks and real potential risks when you start talking about, are you appropriately monitoring this technology? And this is something that Leslie's folks know is kind of fundamental for regulated entities: anytime you have an outside vendor, you don't absolve yourself of responsibility by saying, oh, we've subcontracted to these people. You are responsible for knowing what they're doing and kicking the tires on what they're doing. And especially when you start thinking about the potential things that could go wrong when a municipal issuer is using AI to develop budgets and projections, especially at this time, that's really something that is of interest.

Emily Brock (21:29):

Well, also, interestingly, and I'm thinking about GFOA's best practice on ESG, which Dave thinks is awesome, which is terrific, but we do say in our best practice that the finance officer needs to get outside of their cubicle and make sure they understand what is happening outside of the finance office. Now, if you think about a public organization like a state or a really large regional jurisdiction, the volumes of information that they produce about the things that they do are huge. And so AI certainly does, or could, have the potential of finding specific information that would be relevant, or could potentially, after a discussion with bond counsel, be relevant, to making disclosures on. And so what an interesting thing to come into our space, to be able to harness technology to scrape and pull tons of information about what's already happening for you and how you might be able to disclose that. So it certainly has potential.

Dave Sanchez (22:36):

Well, and even one of the things that was reported in that Bond Buyer story was that this particular entity was using, in addition to everything else, AI for one of the things I talked about this morning, which is checking to see how their neighbors were doing climate disclosure and sort of pulling from that. So that also was very interesting. I think if you look at the speeches across the board from the SEC on AI, there's a real recognition that, as with anything else, there's a lot of potential for this to be helpful, but similarly a lot of potential for risk, including one additional way to engage in fraud that you didn't have before.

Leslie Norwood (23:20):

I am going to pivot, unless you have anything else on this point; we can certainly circle back to it. But I did take this opportunity to look over some state laws in this area. I know we've talked a fair bit about what the SEC has taken a look at, and we've taken a look at what municipal securities issuers are doing, but over the past few years, there have been a number of bills in the states that would require states to set up commissions to study AI, require employers to identify and rectify any biases in the AI technology that they use for recruiting purposes, and put guardrails on how state agencies and entities interact with AI. This year, however, we began to see proposals in the states to regulate AI in the private sector, with regards to regulated entities in particular. So specifically, we've seen a fair amount of state bills that would define AI and generative AI; require deployers and users of AI to disclose when a consumer is interacting with AI; require deployers of AI to run bias assessments of their technology and report the results on their public website or to the attorney general; and require employers that use AI in their recruiting practices, or that use AI when making a consequential decision about an employee or a customer, to let the individual know that AI helped make a decision about them and give the person a right to appeal or correct the data that the company used.

(24:57):

To kind of dive into it a little bit more, this year California, Colorado, and Utah all enacted laws to regulate AI. Here in California, with SB 1047, which was vetoed by the governor on September 29 and is with the Senate right now for consideration of the governor's veto, and also AB 2885, which was passed into law, California adopted new laws which define both artificial intelligence and automated decision systems. The law SB 1047 that the governor vetoed would have required developers of AI models, and those providing the computing power to train the models, to put safeguards and policies into place to prevent critical harms. He argued that while the bill was well intentioned, it applied stringent standards to even the most basic functions performed by large systems, and he announced other initiatives. So this would have governed any corporation, including financial services corporations, using AI in any sort of interaction with the public or clients.

(26:12):

So it's fairly broad based. Then there's Colorado SB 205, where Colorado enacted the first comprehensive AI bill in the country; it has an effective date of 2026, and we are expecting a cleanup bill in 2025. This new law provides that a deployer of AI must notify a consumer if it uses high-risk artificial intelligence systems to make consequential decisions, which include access to employment and employment opportunities, and developers must use reasonable care to avoid algorithmic discrimination in high-risk systems. And so we really anticipate that other states will look to introduce bills that look like this in the next year. In Utah, they enacted a law that requires a person who provides a service in a regulated occupation, financial services likely included, to prominently disclose when a person is interacting with generative AI in the provision of that regulated service. And that law became effective on May 1. And finally, in Connecticut, there's SB 2. Now, this bill passed the Connecticut Senate but did not pass the House before the legislature adjourned, but we do think that maybe this law is going to be used potentially as a model bill for comprehensive AI legislation next year. So we're keeping an eye on it, and we do expect that the states will continue to look at this in terms of risk assessments and future regulation going forward. I don't know if anybody's got any thoughts on state regulation.

Emily Brock (27:59):

I think it was inevitable. I think the inevitability of it is obvious to everybody sitting here who is interested in AI: along with AI, we want protections in place. And who provides protections? Well, the state government provides protections; theoretically, regulators provide protections. And again, as local governments, as creatures of the state, the protections that the legislatures are looking to provide actually are good for governmental operations. I mean, you theoretically couldn't get bamboozled by a bot, and you could instead know and understand what it is that has reconciled that information. You can read it more, well, you should have read it more thoroughly no matter what. But certainly, again, the intentions here I think are very interesting. But to one of Dave's points earlier, we know AI where it is right now; we don't know where AI can go. So a specific definition of AI in these laws might not be super effective. Keeping something static in this evolving world is certainly one of the challenges, I think, for policymakers across the country.

Dave Sanchez (29:16):

Yeah, I mean, it was interesting. Obviously some of the laws you're talking about here are much more broad based than anything that we would do, but there was so much definition around what is AI, what is generative AI. And at least from my perspective, those are the kinds of things that can really box you in and create issues as you try to regulate on a longer-term horizon.

Leslie Norwood (29:41):

Anyone have any questions? We've kind of rambled on for quite a while about different aspects of AI. This is developing very quickly. Anybody have any thoughts they want to share or questions? How about a show of hands while you come up? Do you have a question to ask? Nope. Nope. Okay. How about a show of hands: who's tried to use AI? Wow, that's impressive. So pretty much about 80% of the people have tried their hand at it. In terms of using AI in your business, whether you're an issuer or whether you are a financial services professional using it regularly for business purposes, a lot less. And I'm assuming that your corporations or your governmental entities are developing policies and procedures to govern that. I'm seeing a lot of head nods. I think one thing I'd like to point out is that there are a lot of different governmental entities, state, federal and whatnot, that are considering all these rapid developments.

(30:48):

I found it interesting that the Department of Justice, which releases its thoughts about corporate compliance programs periodically, updated its statement on the evaluation of corporate compliance programs, and it can be kind of extrapolated to cover all sorts of compliance programs. But they updated it to focus on risks that you might not be thinking about, including risks around your AI and how these different tools might open up additional cyber risks and others. But also, I think you can potentially use AI to try to catch different things and use it as a tool in your compliance program. So I think it's both a risk and a tool there.

Dave Sanchez (31:41):

Yeah. Also kind of interesting is definitely this ability to be kind of transformative to the industry. And I think in more informal conversations that I've had with people in the industry about AI in general, and again, if you look back at that conference at the beginning of the month, that was more what the focus was: people have talked about its potential impact on credit ratings. People have talked about, and actually Dr. Marlow talked about it yesterday morning in the CDAC pre-conference, its potential in simplifying disclosure, and that AI had been used to simplify disclosure, but also as a pricing service, and also sort of taking away some of the functions that are done by human beings now, which is something that is a consideration for AI across all industries, but especially our industry. So those are more of the kind of informal conversations that I've had about it, where its ability to really transform and upend the industry comes up, and that is also very interesting from both a policy and a personal viewpoint.

Leslie Norwood (32:49):

So we're going to get lower personnel costs, but higher power costs, because the computers are going to do all the thinking for us.

Dave Sanchez (32:57):

Yep. It's kind of like The Matrix.

Leslie Norwood (33:03):

Are you Neo? Anyone else have any thoughts or questions that they want to share? We're five minutes short, but do you have anything else that you want to share?

Emily Brock (33:18):

No, I think GFOA remains hopeful, and also thoughtfully pessimistic in some cases, in the appropriate places. But I think obviously our market is sort of characterized as one that doesn't like to advance, and I personally think that's not true. I think advancement occurs when there is a benefit, and in particular from an issuer perspective, we have to look at efficiencies and cost savings. So if AI can be effectively deployed, and it is open source and it is free, then I see that coming in a very effective way. But the proof has to be in the pudding that it increases efficiencies for issuers, doesn't add additional costs or unfunded mandates, and at the same time allows us to see pricing differences as well. So that's all I'm asking for. It's not a lot.

Leslie Norwood (34:21):

I mean, free is always relative, kind of like you have your Tesla that doesn't use gas, but it uses electricity, and somehow that electricity has to be created somewhere. It's just further downstream. So you don't have a direct cost for that AI, but you've got a cost somewhere, right? Somebody's got to pay for it. Right?

Dave Sanchez (34:41):

Yeah. I actually would say I kind of disagree that the market moves forward when it makes sense on an efficiency basis. I think there's definitely always a lag, and always people hanging on for dear life to the way things used to be, which actually creates more disruption in the market when things ultimately move. But that's me editorializing on that particular issue. Fair?

Leslie Norwood (35:05):

Yeah. Well, I mean, change has costs, right? I mean, I think inevitably, as things develop and inertia disappears, there are definitely going to be costs associated with that. And I think Emily is talking about the balance between the costs, but yeah, new technology definitely is going to cost no matter what, right?

Dave Sanchez (35:25):

Yeah.

Leslie Norwood (35:26):

Things age out. With that, I think, any final thoughts, Dave? We're good?

Dave Sanchez (35:32):

I'm good.

Leslie Norwood (35:33):

All right. I hope that Dave and the SEC are thoughtful going forward about anything they may do in AI.

Emily Brock (35:40):

As do I.

Leslie Norwood (35:41):

Certainly. We're keeping an eye on the states and what's going on there, and hopefully we'll be back to you next year and see what the developments are in this area. Thank you very much. Thank you.