Dakota Mornings Radio Show AI Discussion Transcript (With Michael Bell, Mark Hagerott & Karel Sovak)
It was an honor to be invited to co-host Dakota Mornings with Michael Bell! Here’s the transcript for your review.
In case you don’t have time to read the entire transcript, you can get a quick overview of the highlights with these quotes that ChatGPT spotlighted. Read it HERE
Michael: This is Dakota Mornings. I am your host, Michael Bell, KFYR 550 AM, 99.7 FM. We have Danita Bye and Chancellor Mark Hagerott in the studio talking about artificial intelligence.
Let me give you a little bit of background on Danita. She is a leadership and human development coach. She has been in leadership and sales development for 20 years, is a contributing author for the Forbes Council, and has worked with Harvard Business School MBA students. I could go through her resume for days. The life she has led and the leadership she has provided across a wide spectrum of opportunities is nothing short of remarkable.
Michael Bell, Host of Dakota Mornings Show
I first got to know Danita through her leadership. She is on the North Dakota Petroleum Council Board. Danita, thank you for taking the time to come into the studio and for the opportunity to share a little bit about your experience, as well as for the deep conversation we're going to have today on artificial intelligence. We have a great lineup for that. Danita, thank you.
Danita: I am honored to be here. I understand this is an inaugural co-hosting, and I hope we'll have so much fun and engagement that we'll do it again in the future.

Michael: I think we have somebody else joining the studio this morning to kick off the show, and I'm going to take a moment and let Danita introduce our first guest, Chancellor Mark Hagerott. Danita, why don't you go ahead and give us some background? And then I want to talk about this conference in late September at Valley City State University and some of the takeaways from there.
Danita: Thank you, Michael. As a background, several years ago, I wrote a book, Millennials Matter, Proven Strategies for Building Your Next Gen Leader. That title encapsulates one of my passions, which is how we train, lead, mentor, and coach the next generation of leaders. It’s been through that similar passion that I’ve met Chancellor Mark Hagerott, and I’m going to let him share his background specifically as it relates to AI cybersecurity. Mark, you’re on.
Mark: Thank you, Danita, for inviting me. And it's been an honor to work for her as chancellor; I know she's here in her private capacity today, but she's on the State Board of Higher Ed. My background was years ago in the Navy. The Navy is a very complex, advanced-technology organization, and I was on several ships that had low-level AI. You could turn a switch, and what was called artificial specialized intelligence would take control of the entire ship, because the speed of the weapons was just beyond human capacity. We're seeing this now, obviously, in Ukraine and the Middle East. Later, I was a cybersecurity professor at Annapolis. Just this morning, I kicked off the conference at UND on cybersecurity; there is a big, big event in the Upper Plains today.
Building on that, AI just began to naturally evolve out of those studies and classes. I’m teaching a class at NDSU with Dr. Cooley this semester called AI in Society, the History and Philosophy of the Human Machine. The other part of my background that’s relevant, and I know that Ms. Bye wants to get into this, given her background, is that I’m a former propulsion energy distribution nuclear engineer. The whole question of energy and computation was a theoretical challenge for about a hundred and some years. And then finally, scientists proved, which we all know is just a reality, that information derives from energy. But people didn’t know that for a long time. They thought information was free. That’s my background between the Navy, teaching cyber, as well as being a nuclear engineer. And it all comes together in this huge event that will probably be the most important in the history of this planet: the emergence of intelligent machines.
Danita: On September 26, an inaugural AI conference was held at Valley City State University. The subtitle was Being Human and Working in the Age of AI. You participated, but I’m curious about some of the key takeaways you took from being involved in that conference.
Mark: It was a great event, the first event in the state system’s history because it involved all 11 campuses. University of Mary co-hosted that, and I think we’ll have my fellow organizer, Dr. Sovak, who will be joining us here shortly. Monsignor Shea and I were on the stage. He gave a very thoughtful perspective.
Danita: We're talking with Mark Hagerott. This is Danita Bye, in case you're just joining us. We're having a conversation about the inaugural North Dakota AI conference that was in Valley City a couple of weeks ago, in which Mark was engaged. And Mark, my question is, you have such an extensive background with cybersecurity and AI, but I'm curious, what were some things that popped out for you?
Mark: It was just a great event: 11 campuses, U of Mary, speakers from across the country. The first thing that came out is how to frame what's happening, because it's on the radio every day; we're here, of course, today. It's in the news, and national commissions are being held. The Russians have announced there's an AI arms race, and that whoever wins it will conquer the world. I mean, these are serious times. But the best thing I saw first was how to frame what's happening: this is what is called a general-purpose technology revolution. And there have been three others.
The first one was the industrial revolution, with the creation of engines, steam engines, and gas engines. Think how it changed North Dakota. The second one was electrification. Think of all of our co-ops. My dad was born in a home with kerosene lamps and no running water, and when they got the REA to come out, his life changed. Then, in our parents' time, and probably even my early years, came the invasion of the computer: laptops and the Internet. And the fourth big one is going to be, or is now, AI, in particular generative AI.
That gave a context, because there's lots of doom and gloom, like, oh my goodness, it's going to come after us. No, it's just the next mass disruption, and it will have huge impacts. That was part of what we talked about: what could be the impacts of that. But I'll stop there in case you wanted to follow up on anything I've said so far.

Danita: Well, Mark, I'm keeping notes here, so I wanted to make certain I got this. You called it a general-purpose technology. And what was the first one?

Mark: Well, the first one was the invention of engines.
Think about that. Britain would never have become a world power if they couldn't pump the water out of the mines in England. The mines were flooding, the trees were pretty much gone, and they had to rely on coal. A man named Newcomen, and then Watt, invented the steam engine. Then it became the gas engine, and basically it replaced the horse. Brad Smith of Microsoft wrote about this in his book: the time the horses lost their jobs absolutely changed America, when engines replaced what we call horsepower. That's how it was measured, horsepower: how many horses did it take to pump X amount of water out of the mines? We still use the old phrase. But back to the impacts in Brad Smith's book. In fact, I was just teaching this chapter with the students at NDSU last week. He links second- and third-order effects of the engine to the Great Depression, because it so disrupted our economy, and we weren't mindful of the effects.
So, the first one was engines. The second one was electrification. Before electrification, you had to have all the factories downtown; they had to cluster, as in Minneapolis, where the water came over the falls and drove the turbines, right? With electricity, they could move out to the suburbs. The third one was ICT, information and communications technology: think laptops, iPhones, the Internet. And most serious people say the fourth big one is generative AI, what we're now seeing. So, we're living in exciting times.
Danita: Mark, I know that there are others who have said this before you, but in one of the first papers of yours that I read, or maybe one of your first talks, you discussed both the promise of AI and the perils. So, talk a little bit about what you see. Maybe what are a couple of top promises and then a couple of top perils?

Mark: Well, we're already seeing the promises. And I very carefully distinguish those from technologies that can undermine mental health and student health. Actually, one of my students is a future Army chaplain, and her whole field is what's happening to the mental health of young people. So, we'll get to that in a second.
But on the positive side, it's absolutely incredible. Just as engines basically saved Britain and increased productivity, and electrification let us work in new places, we would never have had space travel, air travel, or, of course, computers without these revolutions. Most recently, AI has already been shown to be saving lives in Africa, where there are not enough doctors to read chest X-rays. Prairie Public featured a story of increased diagnosis and medical care for people with tuberculosis who otherwise would have died. AI can read these X-rays and immediately identify who needs to get the scarce medicine. So, we're seeing that already.
I think it can also help in education and research. A generative AI called GPT-3 invented a new antibiotic that would have taken years for humans to do it. So, we’re seeing real breakthroughs in medicine already.
On the peril side, just listen to the news. The world is very much on edge in these races of artificial intelligence and AI-guided robotic machines. This is stuff that science fiction anticipated for decades, and it is quite frankly now here. I'm on the Secretary of the Navy's advisory board, and we're in a naval war right now to keep the sea lanes open. Waves of drones are flying at our ships, and we're shooting them down. And as Senator Cramer, who's on the _____ board, has noted, we're shooting down $15,000 drones with $15 million missiles. That can't go on. So, there's a big initiative now to build counter-drone technology, even here in North Dakota. That's the perilous side. And then there are the social media effects on children. Fake identities on the internet are undermining our children's mental health. That's a big issue, and we've seen lawsuits now.
The tribes of North Dakota just brought a lawsuit against Facebook last week on the effects of social media. Those are a couple of thoughts.
Michael: And if you're talking about balancing the good and the bad, Chancellor, the negative side, the deleterious effects, they need to be broadcast. That needs to be known and recognized. So, there's a higher responsibility to bring the negative side effects to light.
Michael: I think you're onto something. And as part of your conference, which by all accounts was an incredible conference, and I believe the first in North Dakota to center on artificial intelligence, we need to go to a commercial break, so we can get to this on the other side of the break. But what I want to know is, are we moving in a direction of higher communication, of taking responsibility for the negative and deleterious effects?
Michael: So, this is Dakota Mornings. I am your host, Michael Bell, KFYR 550 AM, 99.7 FM. We have Danita Bye and Chancellor Mark Hagerott in the studio talking about artificial intelligence.
And Danita, you and I were talking offline some time ago about whether we're doing enough. And when I say we, I mean you and I and the Chancellor; are all of us doing enough to highlight the negative effects of AI?

Danita: Well, in preparation, or actually not in preparation but in celebration of co-hosting with you, Michael, I sent a note to my family about this show. My 36-year-old son-in-law works as a chemical engineer with 3M, and he talked about some of the things they're using AI for. He said his greatest fear is that this technology is going to radically change jobs, and that it is important for us to let people know when their jobs are no longer going to be there. Whatever jobs those are, whatever industries they're in, we have a responsibility to let people know, so that there can be retraining and they can refocus. I just thought that was a fascinating response.
Michael: Yeah, I think it is. And I am not suggesting, without wanting to cause a panic, that AI today is going to put everybody out of business. That's really not the case, and it hasn't been the case with past industry; capitalism is a self-cleaning exercise. But there is an awareness level that needs to be had about the danger zones, and I appreciate you highlighting that.
And I want to talk for a second. You have spent much of the last 25 years studying millennials, and artificial intelligence, its usage and the interaction with it, has really taken off with millennials. What's your experience with that, having published a book seven years ago directed toward millennials, with proven strategies for building your next-gen leader?
Danita: We have another guest joining us today from the University of Mary. I initially met him because he invited me into a couple of his classrooms to talk about Millennials Matter and some of the proven leadership strategies. So, I’d like to welcome Karel onto the show.
Karel Sovak, University of Mary, ND
Karel: Well, thank you, Danita. It’s a pleasure to be here. Hello Michael.
Michael: Karel, good to hear from you again.
Danita: Karel and Mark were involved in this inaugural North Dakota AI conference about a month ago. And, Karel, the question I have for you is: what were some of the things that were interesting or popped out to you at that conference?
Karel: Well, I think it shows it's essential for us to understand the risks and vulnerabilities that exist with AI across all industries. As Michael said, it's certainly not going to do a mass replacement, but we certainly have to understand what we're going to encounter in terms of skill sets. And I think we have to teach our students to be relevant: to positively embed AI into what they're doing, to make sure it's reliable so they can confidently make decisions, and then to be responsible, using it not only wisely but ethically, addressing things like privacy, data protection, bias, and everything wrapped up in those.
And so, a lot of that was presented at that conference, and the whole theme was: how do we be human? How do we make sure humanity remains embedded? It's the old adage of garbage in, garbage out. If we put poor-quality material into AI, the algorithms can't process it into quality information, and poor results will come out. So, we really wanted to teach individuals how to be ethically responsible with artificial intelligence.
Danita: That is so true, and what I'm seeing is that there are leaders starting to address that ethical arena. So, I'm curious, Mark, what are some of the voices or threads of thought you're hearing that address this ethical piece?
Mark: Well, thank you for raising that question, because with all these machines, we had to decide what the appropriate uses of these new technologies were. I just want to compliment the State Board of Higher Education. Its Envision 2035 plan, looking out 10 years, identified AI as the number one disruptive driver, and the Board identified that a year ago. So, it wasn't just jumping on the bandwagon. We had studies of workforce, student learning, and cultural values, and AI was identified. So, we've been working on that.
The legislature convened a study group on AI back in 2023, led by Representative Bosch and Senator Davidson, and Representative Josh Christie has really taken up this topic with the governor. The governor has established a study group that, in fact, just met on Friday about how we have trustworthy, accessible, and affordable AI in North Dakota, because that's the key thing. We need to have this available for North Dakotans, but we need to make sure it's responsive to North Dakota values and not some massive machine pumping the data out to, let's say, China, because they become the dominant AI power.
Lots of people are working on this. The Europeans already have an AI act, which actually could be a little problematic, because I think it limits innovation. The reality is, if we hadn't invented the internal combustion engine, if we hadn't built the bomb, our enemies would have built the bomb, and the world would have been different. We are in this race, so to speak, and we want it to be basically a net positive. So, voices are beginning to come to the ethical questions, and I'm just really proud of how North Dakota is tackling this, which doesn't surprise us.
North Dakota is very unique. We have our own state bank, our own state mill. We just don’t follow the crowd, which is one of the amazing things about this country. Those are a few thoughts.
Michael: I appreciate you bringing that to the fore. North Dakota is unique; it has a unique mindset, from what I've been able to ascertain and see. We welcome innovation, we welcome technology, we welcome revolutions, if you will, but we are also cognizant of the downsides. And one of the things you're talking about this morning is highlighting that there is a downside to this, as there is with all advancements, and it needs to be acknowledged and then worked with.
Mark: Absolutely, and that's where I think North Dakota is so well positioned. For years, scientists did not think information was connected to energy. Remember, Einstein said the world is made of energy and matter, and then people came along asking, well, what about information? And people like Maxwell, Claude Shannon, and then Norbert Wiener said information is information: it's not energy, and it's not matter, but it takes energy to create it.
And what does North Dakota have? We have a blessing of enormous energy resources, and, this is important, it's cold here. Now, your listeners are asking, what is he talking about? These new AI chips from Nvidia are so energy dense that the heat they generate is off the charts. We had a meeting trying to strategize on the future of AI, what we call the value-added energy approach. And they said that in North Dakota, data centers can use just ambient temperatures for six months out of the year for cooling, whereas in other parts of the country, they have to bring in enormous energy to cool these computers.
So, North Dakota really should have a strategy of value-added energy, and with the new governor coming in and the new legislature, I think one of their touchstone strategies should be that we become the powerhouse for these massive machines, to help power the free world and our NATO allies, with Canada on the border here. But we also have to reserve a certain portion for North Dakotans, one that represents our values and helps our businesses flourish here in North Dakota.
And we could spend the whole day talking about that strategy, but that’s what the governor’s working group is on right now.
Michael: Well, I would love to spend all day talking about that strategy, because it's something that is very important, and I appreciate you bringing it to the fore.
Michael: Danita, you had a line of questioning going on there that I really liked, and I really appreciated talking about this conference. Why don’t you continue on that? We’ve got a couple of minutes, about a minute here till break.
Danita: Well, I’m not certain if this was addressed at the conference because I was online and only got some of the speakers, but this is for Karel. You and I had talked briefly about the importance of helping our students to use AI wisely and ethically because that’s what the business expectation is.
Karel: Yeah, you’re absolutely right. Businesses are going to have an expectation that our students come out fully prepared, just like any other skill set that they might have. And so, to me, it’s a relevant tool.
I always tell my students: you probably wouldn't use a shovel to pound in a nail, but if you've got the nail up against the wall and the shovel is right there, you might. Still, there will be consequences for not utilizing the tools you have readily available in the most appropriate way. If our students are expected to know this when they get out into the business world, or even before that, for internships and such, then it's our responsibility as faculty members to make sure they're utilizing it and incorporating it ethically. If we don't teach them how to properly utilize it, there will be consequences. They will run into issues. There could even be lawsuits over information or data that wasn't properly protected. So, it's really important to make sure those tools are being taught the right way.
Michael: Karel, that’s a really good point and I appreciate you making it. We need to go to a commercial break. I want to unpack that a little bit more on the direct impacts.
Michael: This is Dakota Mornings. I am your host, Michael Bell, KFYR 550 AM, 99.7 FM. We have Danita Bye, our leadership expert here at Dakota Mornings and guest co-host for this morning's show, as well as Mark Hagerott, Chancellor of the NDUS system, and Karel Sovak from the University of Mary.
I won’t get into too much of the background because I want to continue the conversation we were just having about the upsides, downsides, and how we impact our future leaders.
Danita: So, thank you, Michael. To continue the conversation, Karel, I’m curious about some of the best practices for teaching and ethical use of AI. And our business leaders are expecting that students who are graduating are not only going to be competent at AI but are going to know the ethical use of AI. So, I’m curious when you think about that and say that what are some best practices or what are some of the things that go through your mind?
Karel: Well, I think first and foremost, the students need to understand again that this is just an additional resource we have available to them, just like anything else, though it certainly has a lot more power attached to it. But from our perspective, and we're very blessed to be able to do this, we help them understand that there's a created order, and that means there was a creator. In understanding that, they have to make sure they're ordering their work toward the greatest good, or the common good in our vernacular, and attending to those who need it in society.
So, I'm looking to collect a lot of this information from employers. What is your AI policy? What are your expectations for our students? The more information I can gather from them, the better we can set our students up for success, even before they graduate. And I think that's the most important thing: there's no one size that fits all businesses. We want to find out what each business is looking for, put that collection, that aggregate, together, and then teach our students to make sure they're utilizing this appropriately.
Danita: Well, I love that. One of the reasons I appreciate working with Mark is his emphasis on values and on making certain that we are working in line with what we call North Dakota values. So, Chancellor, I'm curious, as you hear what Karel is saying, how does that resonate with what you're seeing as a priority?
Mark: Well, I think it may be one of the most fundamental challenges for parents listening, grandparents, and teachers: this technology is going to create an enormous amount of wealth. It's going to be on par with the industrial revolution. But we want to avoid the worst of these events, because with industrialization, people in the coal mines in England had their lungs damaged, right? Children worked in the mills until we had laws against child labor. Now, we have a challenge.
Will AI be additive to the flourishing of our youth, or extractive? I'm very alert to this. We hold a public trust of billions of dollars in the university system, but also a sacred trust: they're giving us thousands of young people. And I'm proud North Dakota legislators and voters stopped sports gambling. The data coming out shows what happens when you combine AI with sports gambling: in Kansas, the bankruptcy rate for young men is up 30% in five years. We have to think about where AI could be extractive and damaging.
And social media: since the movie The Social Network, it's clear it can be addictive to young people. Facebook had child psychologists trying to addict children. North Dakota has been alert to these types of things in our legislation, I think, going forward. You know, we can't weigh in on constitutional measures, but I hope the voters thinking about legalizing drugs are asking: does it edify the young people? Does it help workers work more and learn better? Are they healthier? If not, we should be alert that this is an extractive technology. The key…
2nd Hour
Michael: This is Dakota Mornings. I am your host, Michael Bell, KFYR 550 AM, 99.7 FM. We have Danita Bye, Chancellor Mark Hagerott, and Karel Sovak from the University of Mary in the studio talking about artificial intelligence.
Danita, you have a great line of questioning that I really appreciate, and I wanted to give you an opportunity to talk through that.
Danita: Karel was able to stay on for a couple more minutes, so I’m going to lead off with a question for him. He mentioned the ethical responsibility that we have, and there were three areas that he mentioned. I got two of those: privacy and data protection. Karel, what was the third item that you mentioned? And there actually may be a couple more items in there, but I wanted to make certain, so I jotted these down for myself.
Karel: The algorithm. The algorithm that's used, and the bias that goes into it, into AI itself. The garbage-in, garbage-out principle still applies. We want people to understand that what they put in, and this goes back to proprietary information, goes into the algorithm, gets jumbled up like a salad in there, and comes spinning back out. You must be very ethically conscious when you're putting that information in.
And then, of course, when you’re using that information, you want to make sure that you’re using it in a reliable manner. That’s why I say we can’t really take the human element out of this because there’s always going to be that analysis of, is this information I’m getting worth me utilizing or using in a particular report or session or whatever it might be.
Again, we just want to educate the individuals on how to properly utilize that in an ethical and respectful manner.
And I think that's our biggest challenge from a higher ed standpoint: the younger generation is growing up with this artificial intelligence. My 13-year-old granddaughter can ask Alexa, hey, what's 365 times 27, and she'll get an answer immediately. So, they're utilizing this daily.
And so, like I said, business owners are going to have to come up with a plan for how they want to utilize this. There is no one-size-fits-all, but having some type of policy would be very helpful. By gathering the policies that are in place right now, maybe we can formulate something that's workable for all.
Danita: I'm curious from a practical perspective in the classroom. And I recognize that there are formal classrooms and informal classrooms, because we're always about mentoring, coaching, and developing leaders. Karel, what are some best practices you're seeing for helping our next-generation leaders work wisely and ethically?
Karel: I think the most important thing is that I call them effective practices. They might not be the best, but they're effective, and what we've settled on is what's most effective out there. If you can be effective and efficient at the same time, hopefully you're doing it in an ethical manner, and it's providing what you need.
But I think it all goes back to this question: what are we teaching? How are we teaching them to use it? For example, I told my students to go to an AI, put in Measure 4, get the pros and cons, and then conduct the analysis. So, I'm forcing them to use ChatGPT, or whatever source they want, and say, give me the pros and cons and tell me if this would be good or bad for North Dakota, and then let them make that analysis from the information they're receiving.
I'm sad to say, and it's scary to say, that the students came back split 50-50: some said, yeah, I would vote in favor of Measure 4, and others said they would vote against it, based on the information that's out there. And we have to ask, okay, where are these young people getting their information?
Again, it goes back to what we are feeding into this. When you feed the exact text of Measure 4 into the algorithm, the algorithm is going to spit out: here are the pros, here are the cons. And then they still have to make that decision: what do I do with this information? Because there's still a bias, even with the pros and the cons.
Danita: Mark, I'm curious, from your vantage point, what are some practical things you're seeing that we might do, from either a formal or informal leadership standpoint, to help highlight the importance of paying attention to the ethical aspect of AI?
Mark: Well, I just want to thank, again, the State Board of Higher Ed and the Governor's working group, because we're working on how to make sure this is a net positive for our students, helping them be better students and learn more about the world, and not extractive. And second, with businesses, the same way: that this is a net boon to North Dakota's economy, its businesses, and business creation, and not extractive.
Now, these approaches are well-documented in history. I don't know if people realize that when the Internet was first created, North Dakota was the first state in the nation to go to the U.S. Supreme Court and say it is unfair for online companies on the coasts to extract revenue from our state without paying sales tax. Then-Attorney General Heitkamp argued the Quill case, and the Supreme Court dismissed it, in error, because 25 years later South Dakota went back, in South Dakota v. Wayfair, saying this is extractive and unfair, and the Court ruled they were right: Amazon needs to pay taxes. North Dakota has a history of being alert to these things, helping our businesses, helping our students, and helping our people. Is it fair, or is it not?
When it comes to AI, what we need to be alert to, building on what Karel was saying about privacy and our students, is whether these algorithms are helping them learn or extracting their time into mindless tasks. I can tell you right now, in the class I'm teaching with Dr. Cooley, the students are wonderful; our future's in good hands with these students. But guess what? They all agreed to write in-class essays. I'll say that again: in-class essays, on paper, with a pen or a pencil, because they have seen some of the freshmen overly relying on AI to write code and so on, when their foundational knowledge should still be being built. And that's an extractive business application: companies on the coasts do the programming for the students, and the students pay the fees. And I'll tell you, the fees are not cheap. This is where we've got to help North Dakota businesses. I used AI to help summarize a couple of PDF documents, and it was very helpful. On the third attempt, it said, now you must pay $5 a month. Well, we have 50,000 students, faculty, and staff, and if we paid $5 a month for that PDF function, that's $3 million that would go out of our state, right? We have to be aware of which areas are helping us and which are taking too much money out of our state. Then we can get into the privacy issues that Karel talked about, of monitoring and having new laws and rules.
North Dakota, just to share with you, was early on the privacy issues. George Keiser, a state legislator who has since passed away, brought a bill in North Dakota saying that when you're talking in your car, the car cannot be recording your conversation, say between you and your daughter. And the lobbyists killed that bill. Well, guess what? Now all these new cars with AI are listening to conversations they have no business hearing without consent. North Dakota was on the front line; we just didn't carry that one. Maybe there'll be a new bill coming out of North Dakota this year. But we want to take care of people and take care of our businesses here in North Dakota in the age of AI.
Michael: We are joined by Danita Bye, our guest co-host for today. Chancellor Hagerott, thank you so much for spending as much time as you have this morning with Dakota Mornings.
Mark: Well, this is a massive topic. One of the speakers at our AI conference, a retired federal judge, said, I'm in North Dakota because you can't give this issue to the two coasts. You need to have a voice. Thank you for holding this topic.
Michael: This is great. And, you know, this has launched us into what I think is going to be a regular segment, especially post-election. I appreciate that North Dakota is grappling with this at a very early stage.

Mark: Absolutely. Again, our legislators and the governor have rolled up their sleeves and are working on it. I would argue as a historian of technology that this is the most important event in the history of this planet. And I'll back that up. We've had recessions before. We've had wars before: World War I, World War II. We've had pandemics: the 1918 pandemic and the 2020 pandemic. We've had all sorts of social chaos. We have never had the emergence of intelligent machines. And that should excite people, because we're alive when this is happening, but it also puts a huge responsibility on our leaders and the voters to think about this time, and about how you advise your children.
Michael: This is Dakota Mornings. I am your host, Michael Bell, KFYR 550 AM, 99.7 FM. We have had Danita Bye, Chancellor Mark Hagerott, and Karel Sovak from the University of Mary in the studio talking about artificial intelligence.
Let’s discuss a tailor-made interview to meet your audience’s needs.
Virtual speaking event? No problem!
Check out my Speaker page HERE.
To schedule a call, contact me at danita@danitabye.com