Another Cloud Podcast

A podcast designed to bring you stories from the smartest minds in IT, operations and business, and learn how they're using Cloud Technology to improve business and the customer experience.

Eye Opening… AI is taking over

with Alex McBratney and Aarde Cosseboom

Don't have time to listen? Read the full transcription.

SPEAKERS

Aarde, Tyson, Alex

Alex 00:00

Hello, and welcome to Another Cloud Podcast, a podcast designed to bring you stories from the smartest minds in IT, operations and business, and learn how they're using cloud technology to improve business and the customer experience. All right, well, welcome to another episode of Another Cloud Podcast. Today we're really excited: we've got Tyson McDowell, the managing partner over at Great Scale Ventures. Not only that, he is an aviator. He did not fly in today, unfortunately, but we have him remotely. He is also a TEDx speaker, and today we're really excited to talk about AI. But first off, Tyson, welcome to the show.

Tyson 00:42

Hey, thanks a lot, guys. It's really great to be on the show. I've been looking forward to it ever since our lunch and thank you for lunch. It was tasty.

Alex 00:51

Always good, right? Can't go wrong with lunch. Of course, I've got my co-pilot here, Aarde Cosseboom. How you doing, buddy?

Aarde 00:59

Hey, Alex. I love the pilot analogies. I'll be your wingman anytime.

Alex 01:07

The wings, yeah. All right, Goose. So we're gonna go with new names: you'll be Goose, I'll be Mav. So Tyson, for those that don't know you, give us a quick background, you know, how you got to Great Scale, how you got into AI, just a quick career path to where you're at today and a good overview of how you got into AI. Yeah.

 

Tyson 01:34

Well, I'm a serial tech entrepreneur. I had a company that started back in 2000, just out of high school, that made hospital collections more efficient and better for the patients, ultimately improving patient experience while causing money to flow a little bit better. That was a big contact center problem, and a data problem. We ended up implementing a lot of AI techniques along the way that we didn't really recognize as such, because it wasn't a thing people talked about yet; this is all throughout the 2000s and 2010s. We sold that business in 2016 to a growth private equity firm. And, you know, we were proud of the impact it had. It really allowed conversations to occur with patients who were in a lot of emotional trauma around whatever the health issue was, plus the huge financial fear, and it's very confusing in health care. So it was a big integration problem: how do you sum all that up into a three-minute call that gives someone enough comfort to make a huge financial choice, one that's many cars' worth of money? We pulled that off, and we demonstrated a lot of ability to let technical solutions bring empathy to those conversations, and financial health to the health systems that we served. So I did that: I wrote a lot of the software, then built a team and ran the company for a while. All during that time, you know, it was healthcare finance and healthcare reform and tech, and the smartphone came out, and bandwidth happened, and AI happened, and Tesla showed up. I mean, we didn't go back to the moon, but we made the Matrix, and that was awesome. I was super excited. So I decided if I'm going to do something new in the world, it had better be to build a better internet, one that I want to live in and my daughter to live in. Which brought us to Great Scale Ventures: we found tech companies that do that.
And we bring the software to the table, and the business models, and all the funding, so that we can make good on this beautiful future that we're supposed to have for ourselves.

 

Aarde 03:53

Yeah, for all of you listening who have not watched the TED talk, click on the link below. I'm sure we'll put it below; the sound guy is scripting it now.

 

Alex 04:03

Click in the description below.

 

Aarde 04:04

Exactly. And Tyson, you touch on so many great topics there. Of course, the TED style is a storytelling style, and you still tell it so perfectly. Talk to us a little bit about AI, the good and the bad, and tell us a little bit about your viewpoint on AI, machine learning, big data, because our audience is really intrigued about how to use AI, how to use machine learning, how to use all these tools to help support their customers.

 

Tyson 04:40

Yeah, absolutely. Well, a couple of things. Firstly, the technologists among us, and the people that buy and manage technology, have an opinion of what AI is, and their opinion doesn't matter. AI is anything that we humans outsource our thinking to. Some of those systems are very dumb; some of them are brilliant, technically. And I think there's a big pollution of the conversation, because the technologists are so busy debating what AI is. What's the line between logic and AI? Is a forward-chaining inference technique AI? It used to be, honestly: way back in the 80s, when we were doing credit card fraud detection, that was AI. Then neural nets became the thing, and if it didn't have n layers and wasn't explainable, it wasn't a neural net, it wasn't AI. That's a whole bunch of, okay, sure, we can talk about that. But that's not what we're talking about. That's not what my TED talk is about. And that's not what the seven billion people on this earth care about. What we care about is: is this technology actually smart enough to have our best interests at heart, and to be smarter than we are about suggesting what aligns with those best interests? A great, practical example in the business world: I'm going to filter resumes with AI, because there are 300,000 applicants and I need to whittle it down to 1,000. We all know about the diversity issues and explainability issues, but son of a gun, you're going to impact 100,000 people's careers, and entire economies, with a model when you don't even know how the darn thing was trained. Is it really that smart? So the story of AI, to me, is that it's a child; it's in an adolescent state. And it's kind of a whole colony of children. It's kind of like Lord of the Flies.

 

Alex 06:47

It's not good.

 

Tyson 06:48

Yeah, that's exactly what's going on. But cooler heads are prevailing, and we have an opportunity to change that for ourselves. And those of us that influence its creation and deployment have an opportunity to change it for everybody.

 

Aarde 07:10

Yeah, that's a really good example. I'll let Alex ask his next question, but to drill into that a little bit more: I read an article about a month ago about LinkedIn, AI, and resumes, and you brought up a good point there. It was funny, because it talked about people who have unique names, like myself, spelled A-a-r-d-e. If I refer to myself in the third person anywhere in my application or my resume, other than just in the Name field, it would think it's a misspelling, and a misspelling on a resume or an application docks my points. The AI flags that as, you know, not good: this person can't spell, or has bad grammar, things like that. So it's a great example of these ecosystems that are kind of hodgepodged together and could be designed for good, but actually produce harm.

Tyson

100%. And that's the nature of learning and iteration and agility.
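Aarde's unique-name example can be sketched as code. This is purely illustrative (the function, word list, and scoring are invented, not LinkedIn's actual system), but it shows how a naive misspelling penalty punishes a distinctive name unless words from the Name field are whitelisted first:

```python
# Hypothetical sketch: a naive resume scorer that docks points for
# "misspellings" will penalize a unique proper name unless the words in
# the applicant's own Name field are added to the dictionary.

KNOWN_WORDS = {"led", "a", "team", "of", "five", "engineers"}

def misspelling_penalty(resume_text, name_field, whitelist_name=True):
    """Count words the dictionary doesn't know; optionally exempt the name."""
    allowed = set(KNOWN_WORDS)
    if whitelist_name:
        allowed |= {w.lower() for w in name_field.split()}
    words = resume_text.lower().replace(",", " ").split()
    return sum(1 for w in words if w not in allowed)

resume = "Aarde led a team of five engineers"
print(misspelling_penalty(resume, "Aarde Cosseboom", whitelist_name=False))  # 1
print(misspelling_penalty(resume, "Aarde Cosseboom", whitelist_name=True))   # 0
```

The fix is one line of whitelisting, which is exactly the kind of thing a system nobody audits never gets.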

 

Tyson 08:11

So, no one intends to be bad; very few people intend to be bad. The issue is, for some reason, we're in this place of believing that AI is someone else's brilliant invention. And so we sort of suspend disbelief and let it run amok a bit. And people tend to be afraid to call BS on it when it's wrong, because oftentimes they don't know how it's working, or they rented the algorithm, or the algorithm came from a college or university, or, you know, all kinds of stuff. We don't know how FICO actually behaves and why, and that's a machine-learning-based application of AI that is very old; people don't really question it. And then you have all these new and novel, rapidly developing applications. The main thing is: everybody is smarter than the AI. When something doesn't smell right, bring it up, fix it.

 

Alex 09:18

You mentioned, you know, that it's in its infancy, but it's growing extremely fast, with everything that everyone's doing and all these tech companies trying to disrupt all these different industries. Where do you see regulation coming into play? Is there going to be a regulating body that's able to look at these algorithms and, you know, make sure they're doing good, or is that even on the table right now?

 

Tyson 09:45

That's already here, and it's Europe. I'm so impressed with Europe, and I am so unimpressed with America. But the good thing is, it doesn't matter. A good example: when GDPR came in, that was actually an AI control regulation, technically. The General Data Protection Regulation stipulates the right to be forgotten, the right to delete data that's inaccurate or old or whatever. And it also modified the authority of opt-in, so that someone couldn't be passively opted into a new data dimension that AI can chew on. That comes across as "you can't sell my data unless I let you." But, for instance, Facebook is an example (I don't know if it's a literal example): Facebook might suddenly decide that you are in cohort 516, politically, whatever that is. And in your data settings, a whole new setting comes up with a checkbox, pre-checked, that you're willing to share your political cohort with Facebook for the purpose of advertising or whatever the heck else they do with it. Before, that was pre-checked; after GDPR, that's not allowed, and you have to go in and manually check it. That was huge. Facebook conversion rates fell through the floor: all this passive extra data that people had amassed on themselves and unknowingly shared got removed, and therefore the AI for ad targeting got weaker. What that puts pressure on, all of a sudden, is that Facebook now has to go and say: I've come up with a new thing, and I've tagged you this way. Are you okay if I share that with somebody? And that's absolutely right. It puts pressure on Facebook to tell you, and then they have to convince you what's in it for you. Is it better ads? Is that good enough for you? Now it's up to you. And it puts more pressure on the product to actually think about what the user wants. And there are next-generation data privacy regulations coming out of Europe.
They're absolutely related to transparency around AI manipulation. If information has been pre-sorted and prioritized, the system actually has to tell you that when you see it, and information that was written via automation needs to be disclosed as such to the end user. Another very, very good check and balance on the natural course of AI. What we do in the US, all this antitrust and monopoly stuff (we can go into it if you want to have that conversation), that's all useless. Totally useless. It doesn't do anything.
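The pre-checked-checkbox point can be sketched in code. Everything here is hypothetical (the class name, the cohort label), but it captures the opt-in rule Tyson describes: a newly inferred data dimension must default to "not shared" until the user explicitly acts:

```python
# Hypothetical sketch of GDPR-style opt-in: a newly derived data dimension
# (e.g. an inferred political cohort) defaults to *not shared*; pre-checking
# the box for the user would violate the opt-in requirement.

class ConsentSettings:
    def __init__(self):
        self._shared = {}  # dimension -> bool, explicit user choices only

    def add_dimension(self, name):
        # New dimensions always start opted-out.
        self._shared.setdefault(name, False)

    def opt_in(self, name):
        self._shared[name] = True

    def may_share(self, name):
        return self._shared.get(name, False)

settings = ConsentSettings()
settings.add_dimension("political_cohort_516")
print(settings.may_share("political_cohort_516"))  # False until the user acts
settings.opt_in("political_cohort_516")
print(settings.may_share("political_cohort_516"))  # True
```

The pre-GDPR version is the same class with `add_dimension` defaulting to `True`, which is exactly the conversion-rate cliff Tyson describes.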

 

Alex 12:35

What are some concrete examples of that, as far as being notified that whatever is served to you is part of an algorithm, like you were just mentioning?

 

Tyson 12:51

Yeah, so there was some US regulation pre-GDPR that forced the hand on that. If you're on a social media wall and you see sponsored content, the fact that it says it's sponsored content, and the UI has to have differentiation, is one of the first examples of an implementation of that. In fact (I'm not 100% sure, go check my research sources) I'm pretty sure it was regulatory pressure that caused that user experience to shift. So it will look like that, just that instead of saying "sponsored content," it might say this content was augmented with automation, or this headline was written through automation, or there are 19 different versions of this headline. It's unclear what all those regulations are going to be yet, but it is clear that they iterate, and in Europe they're iterating in that direction. And the cool thing about tech companies is that they're global by accident. It's kind of like when Trump was trying to get TikTok shut down in the US: it's hard to shut down in the US. It's either comply with the US and the whole system, or block the US. And you can't block a country and be a good business online. So you have to comply with the least-common-denominator regulation, which is great, because it also relieves the US from having to be the police on this, because that's not politically popular.

 

Alex 14:22

huh?

 

Aarde 14:24

Yeah, I remember scrolling through Instagram, I think it was a year ago, when I first saw sponsored content with a little note that it's sponsored. But, and I realized this just yesterday, my wife was scrolling through and saw some sponsored content on Instagram, and she clicked the three little dots in the upper right of that content, and she could actually report it, block it, or say this content is no longer relevant to me. Which, to me, was a new feature. I was like, wow, it's evolved beyond just declaring that it's sponsored; now they're going to the next level and the next level. Do you think, future thought, with regards to AI and marketing and sponsored content being bombarded at us over all sorts of different channels, whether it's audio, visual, or through social: where do you see the future on the consumer side? Do you think we're going to have the ability to tell the marketers, this is what I'm interested in, don't show me anything else? Like, I only like, you know, barbecues and whatever the things are that interest me. Do you think there's going to be some kind of user-defined ability for the consumer to choose?

Tyson

I think yes, in certain applications, and those are the ones that I would prefer to use. All this AI works great if it's there to go get a result, and if the result is defined by the end user and it's what the user really wants, let the AI obsess about that user getting success in that area. Right now, the way it works is it guesses about who you are. So it's already assigning you an intent; it's just that the intent is inferred from your historical patterns, who else you know, cohorts you're in. And that tends to cause regression into groups, especially with social media, because on the consumer side people talk to each other, and they do so in like ways.
And you can say that's how fake news spreads, and that's how candidates get elected these days. Candidates that win elections understand one thing: they understand that there are 13,412 specific, optimized cohorts of differing opinion on a topic. That's literally kind of how these algorithms look. But you could roll that up into seven categories. There are seven messages, all of which are inconsistent, some of which are polar opposites. I, as a candidate, know that I'm talking to cohort seven, and I'm going to say that the sky is blue; then I'm going to talk to cohort nine, or one, and I'm going to say the sky is green. The two will amp each other up in excitement and agreement. And occasionally someone will come in and say, no, the sky is green, you people, not blue, and in that universe they get flamed out of existence. The information, or even the idea, that the sky is green is one in a hundred across all of the whatevers, and the exact opposite is true in cohort one. So that's the after-effect of systems that don't ask you what you want; they infer what you want. And then people like Mark Zuckerberg say: clearly, people like content that's abusive; clearly, people like saying negative things about each other; of course they like only having seven-second content snippets; and of course they like Instagram Live. They don't, quote, like it. They've been observed to go there, and then driven to go there more, and therefore the data looks like they like it. So I think in the future there are going to be different systems, serving a more intentional advertiser, in the consumer space. In the tech space, in the enterprise space, this all matters in a lot of ways, because ultimately what's governing enterprise, even faster than regulation, is liability, and liability insurance for the company.
So, you know, a lot of this AI stuff we can talk about in consumer land, with politics and regulation, and it's a little bit hearsay. In enterprise land, you can install a system that takes you from HR-compliant to not. You can install a system that takes you from GDPR-compliant to not. You might be totally sewed up around hacking and security and stuff like that, and introduce a feature that causes someone to be excluded. And once you have a violation like that, you can't get D&O insurance. The insurance companies know this, the board members know this, the risk committees and boards of public companies know this, and they're the ones putting real pressure on enterprise. It's like: you can play with the sizzle, but you'd better be sure. And that's why you get these innovation departments willing to do a lot, but then it's tougher to go deeper.

Aarde

For some of the people who are listening to this podcast, they run sales and marketing and customer service teams, and whether they know it or not, they're potentially using AI, whether it's Facebook ads or something having to do with contacting or communicating with their customers in a more outbound fashion. So if you were in the decision-making room with some of the people who are potentially listening, like an SVP of marketing or customer service, what would be some of the advice that you would give to them? You know, lean into this, or maybe really hone in on what you should be using this AI for? Tell us a little bit about how you advise or consult people who are dipping their toes into it.

 

Tyson 20:47

Yeah, well, I kind of start with: what are the headlines that I equate to hype? What are the headlines that can frustrate your ability to make good choices? One of them is that AI is powered by big data: send big data out and get great answers back. In almost all enterprises, unless you're doing something truly big-data related, like predicting earthquakes or the weather, you have a business process, and you want to use AI to enhance the cost proposition and the customer value proposition. You're in a business that is doing a very tight domain activity, whether it's legal document processing, or supply chain audit, or collecting on car loans, or whatever. You have a process that you can understand and study, and AI can make that process more efficient. And the moment you try, you realize it's dumb, because you have smart people that understand your business. So listen to those people. Let them iterate; let them control the AI. It's partnering up. AI's job is to take the best of your knowledge and just multiply it, in the enterprise case, in my opinion, almost all the time. So don't forget, in fact double down on, iterative product management: invite all the stakeholders, gut-check the output and results, produce really accurate training data with very manual, thoroughly audited processes. Send 800 data samples to a machine learning service, and come back and see if that model agrees with your quality assurance. Let's say you're doing call center quality assurance coaching. Sure, I can just buy an algorithm that gives my reps a call center score of one to five, but how the heck did that thing get trained? That's fine, start with that, but compare it to your best QA people doing a thorough QA process, and find out how right it is, because it's wrong somewhere.
And what you do with that newly compared data is you send it to a machine learning service, and you get your own version of the algorithm, and iterate, iterate. So really, the best advice is: AI is not going to teach you anything new and brilliant about your process. It's going to remember what you learned that was new and brilliant about your process, and automate it.
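Tyson's compare-then-retrain advice might look like the following sketch (the function, scores, and sample sizes are illustrative, not any specific vendor's API): score calls with the bought model, have your best QA people score the same sample, measure agreement, and keep the disagreements as new training data.

```python
# Illustrative audit loop for a vendor QA model that scores calls 1-5:
# compare its scores against human QA on the same calls, measure agreement,
# and collect the disagreements as labeled data for your own model.

def audit_model(model_scores, human_scores):
    """Compare 1-5 QA scores; return agreement rate and disagreement cases."""
    assert len(model_scores) == len(human_scores)
    disagreements = [
        (i, m, h)
        for i, (m, h) in enumerate(zip(model_scores, human_scores))
        if m != h
    ]
    agreement = 1 - len(disagreements) / len(model_scores)
    return agreement, disagreements

model = [3, 5, 2, 4, 1, 5, 3, 2]   # vendor model's scores on 8 sample calls
human = [3, 4, 2, 4, 2, 5, 3, 3]   # your best QA people's scores, same calls
agreement, retrain_cases = audit_model(model, human)
print(f"agreement: {agreement:.0%}")  # find out "how right it is"
# Each (call index, model score, human score) disagreement goes back into
# the training set for the next iteration of your own model.
print(retrain_cases)
```

In practice the sample would be the 800-odd thoroughly audited calls Tyson mentions, and the loop runs every iteration, not once.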

 

Alex 23:30

Yes, it reminds me of a The Office episode, where Dwight Schrute was up against the e-commerce site at Dunder Mifflin, and it was a battle of whether he could beat it or not. It's kind of the same way: you take your best sales rep or your best agent, compete against the algorithm, then look at those results and see where it comes out. Is it truly giving you the answers you want? I talk to a lot of IT executives, and we had someone on the podcast two days ago talking about big data. He was saying he did a course at Harvard, where they said only 5% of corporate data is even being used for, you know, learning, for business intelligence. And, as he said, a big piece is just knowing what questions to ask. You can have all this data, but if you're not teaching this child of an algorithm to ask the right questions, it's going to be all over the board.

 

Tyson 24:25

Yeah, right on. And another framework that I think people can apply is: when you have an AI model, it's a new employee performing a new role.

 

Alex 24:38

That's a good way to look at it,

 

Tyson 24:39

Treat it exactly the same as that. It just so happens that this new employee has a nearly unlimited amount of productivity in them, which is dangerous when it's a really, really new employee performing a brand-new role. So whenever I look at information systems that actually model teams, automation and human beings are both classified as the same actor. They are treated with the same requirements to prove value. They are treated with the same requirements to do incrementally reducing quality assurance cycles. They're all measured by the same KPIs and indicators. They're all optimized as a cohesive team. In the enterprise setting, AI is a teammate, and it's got to earn its keep and earn your trust, just like a human.
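The teammate framing could be modeled roughly like this (a sketch only; the `Actor` fields, the 0.9 threshold, and the halving rule are invented to illustrate "incrementally reducing quality assurance cycles"):

```python
# Sketch of the "AI is a teammate" framing: automation and humans are the
# same actor type, measured by the same KPIs, and the model earns reduced
# QA oversight only as its track record justifies it.

from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str
    is_automation: bool
    qa_review_rate: float = 1.0   # start by reviewing 100% of output
    kpi_scores: list = field(default_factory=list)

    def record_kpi(self, score):
        self.kpi_scores.append(score)
        # Reduce QA cycles incrementally, only after sustained performance.
        if len(self.kpi_scores) >= 3 and min(self.kpi_scores[-3:]) >= 0.9:
            self.qa_review_rate = max(0.1, self.qa_review_rate * 0.5)

team = [Actor("Dana", is_automation=False),
        Actor("qa-model-v2", is_automation=True)]
for score in (0.95, 0.92, 0.94):
    team[1].record_kpi(score)
print(team[1].qa_review_rate)  # trust earned: review rate drops from 1.0
```

The point is structural: the model and the human sit in the same list, with the same fields, and any KPI dashboard or performance review process applies to both.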

 

Aarde 25:31

Yeah, that's a really, really good call-out. We did a podcast a while ago where someone was talking about human-centric outcomes. They're not even dipping their toe in automation or AI today, but in everything they do, every meeting they have, they make sure they have that phrase up on the wall: what is the human-centric outcome? What are we going to do or produce? What's the outcome going to be? What's the result? I love the idea that AI, or whatever this tool set is that you're trying to implement, you treat as another employee: pull up, you know, a physical being and call it out and say, what is the purpose of this self-service bot or whatever it is? Because if it doesn't have some sort of human-centric outcome, it's either not going to be successful, or it's not going to be received very positively by your customers and members. So

 

Tyson 26:29

absolutely,

 

Tyson 26:30

And who's going to supervise it? You know, if AI were really an employee, it would have a boss. And if it misbehaved, you'd know, by way of certain metrics, like customer success, or complaint rate, or efficiency level, or misappropriation of resources. And whoever is responsible would say, yeah, it's misbehaving, so what's the performance improvement plan? And go back and fix the thing. A lot of people just kind of let it do its thing, and they think it's working out of the box. And that's insane.

 

Alex 27:07

What are some examples you've seen where people or companies have done it really well, a specific use case where they have trained this new employee and taken it from an infant employee to just kicking ass?

 

Tyson 27:23

One example, from a couple of years ago: I met the self-appointed ethics team at Amazon for the HR hiring process. They're big data engineers who took it upon themselves to try to whittle a large number of resumes down to a smaller number of resumes, and a specific outcome requirement for the AI algorithm was readable diversity. And that's a hard training set to do, because for the resumes that are attached to prior successful people, the reality is that older white men have a greater density in that data set of success. You can blame society or whatever; it doesn't matter why that's true. So when you have a training set that's already naturally, let's call it, biased, that's a problem. They brought a ton of engineering mind, and data science mind, to the table to engineer bias out of that. As far as I understand it (I wasn't on that team, I just got the pleasure of talking to a couple of people who were passionately leading it), they thought they got it down, they ran it, it didn't meet their tests, and they shut it down. Now, I don't know if it's been modified and turned back on in other ways; this was a one-time interaction I had. But that was a great example: thesis, apply, learn, stop.

 

Alex 28:57

Yeah,

 

Tyson 28:57

Again, you know, don't just set it and forget it. It's not a Ron Popeil food dehydrator; we don't know if it's going to work. So that's a really good example. And then, for where it is successful, going way back, and I like talking about this just because it shows you this is not new: credit card fraud detection and intervention is a very AI-centric thing, and it does it really well. I'm going to block your card, then reach out to you with a human workflow: was this you, was this not you? If not you, block the card. If it was you, I'm going to back that up with a real-time response, do a verification, and release the transaction going forward. That is an incredible application of AI. It's automatically corrected, because it includes a human verification step before it takes super action: temporary block, business process, positive human identification and verification. Am I right, am I wrong? It's always, every time, am I right, am I wrong? And it's optimizing the human cost of doing that verification and intervention process. So that's a very mature AI system, a human-AI combination, that I think people should study.
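The fraud-detection loop described here might be sketched like this (all names and thresholds are hypothetical, not any real issuer's system): the model takes only a temporary action, a human verification step decides the real outcome, and every answer feeds back as labeled training data.

```python
# Sketch of the human-in-the-loop fraud workflow: model flags, temporary
# block, human verification ("was this you?"), release or block, and the
# answer is logged as an am-I-right/am-I-wrong training example.

def handle_transaction(txn, fraud_model, verify_with_customer, feedback_log):
    if fraud_model(txn) < 0.5:
        return "approved"
    # Temporary block only; the human verification step decides the outcome.
    was_you = verify_with_customer(txn)   # "Was this you?" via text or call
    feedback_log.append((txn, was_you))   # labeled example, every time
    return "released" if was_you else "card_blocked"

log = []
suspicious = {"amount": 2400, "country": "unfamiliar"}
result = handle_transaction(
    suspicious,
    fraud_model=lambda t: 0.9,            # stand-in model: flags this txn
    verify_with_customer=lambda t: True,  # customer confirms it was them
    feedback_log=log,
)
print(result)    # released
print(len(log))  # 1 labeled example for the next training cycle
```

Notice that the model never takes an irreversible action on its own, which is exactly why the system self-corrects instead of running amok.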

 

Aarde 30:19

Now, changing gears a little bit and talking more hypothetically, more about your viewpoint. Let's say the way that we use AI as a society doesn't really change; we're kind of on this path today and continue on that path. Do you think it's going to be harder for a consumer, for us as humans, as individuals, to be able to change our viewpoints? Because if we start liking football, football things are just going to be in our feed, and we'll never be able to say, you know what, maybe I like soccer, or softball, or whatever it is. It's going to kind of be drowned out by what you already like being surfaced to you.

 

Tyson 31:02

Yeah.

 

Tyson 31:04

It's gonna get a little bit worse, and then it's going to get better, no matter what we do or how intentional we are about it. You know, look back in history and things always wobble around. To quote a masterpiece of a movie, Jurassic Park: nature finds a way. I think that was when the frog DNA, or whatever, made it so the dinosaurs, that were all one sex, could breed.

 

Alex 31:30

Yeah,

 

Tyson 31:31

Yeah. Anyway, that sounds

 

Alex 31:34

Just like Jeff Goldblum right there.

 

Tyson 31:36

Yeah. It finds a way, right? That's true. Now, there is something that was really interesting. In our business, we were exposed to a report from an analyst company that talks about human attention. Up until fairly recently, the last year or so, we consumers had more attention to give. So anytime there was a new thing, there was attention available to consume that thing. In the last few years, though, that is no longer true. We are information-saturated, to the point where a competitor to Peloton is Netflix. You don't normally think of it that way, but it's absolutely true: both a time-and-attention competitor and a wallet competitor. And there's just not enough room. So now everything is being sold to people who have zero more time and attention. They might have money, but they don't have time and attention.

 

32:43

Yeah

 

Tyson 32:43

So that means that for businesses to succeed, they must get better at penetrating that, at telling stories about why their thing is going to have a 10x payoff in time and attention and money, just to get trialed, let alone bought. So it naturally moderates goods and services and social media toward understanding the people. But it requires all of us to be so saturated, and flaming each other, and closed-minded and evil and angry, that it's actually, ultimately, the advertisers' self-interest that pulls us back out of it. And amazing products that do amazing things, and people begin to be hungry for that. So if you just don't intervene, it'll fix itself, and it'll take 25 years. And there's going to be a generation that my daughter's in (our daughter, my wife Erica's and mine; I can't say just my daughter, because she's Erica's pride and joy just as much, and she has all of her craziness and stubbornness).

 

Alex 34:03

Oh, edit that part out. Don't worry.

 

Tyson 34:05

Oh, good.

 

Tyson 34:06

I love them both. But, you know, she's going to be in this environment where we're maxed out on attention, we're abused, all this adolescence has been let loose, and everything's running amok. And yeah, it's going to be hard to know what's an opinion. It's going to be hard to tell if chasing that view count is more valuable than earning that dollar. Is there a payoff for going out and having an opinion and asking a vulnerable question, if all these people just tell me I'm stupid? Is that going to make her double down on believing she's stupid, and have her resort to anxiety drugs? All of that sounds so dumb, but it actually happens a lot, and it doesn't sound dumb anymore, which is scary to me. So we're going to keep going down that path for a while, until it hits the profits of the clickbait-type methodologies and the superficial stuff. When the insurance claims start coming through for people who have been violated by these systems, when the companies' boards of directors and shareholders are facing real and severe financial liability for the actions and results of these systems, then you end up with legal reform, then you end up with case law and prosecution, then you end up with insurance refactoring, then you end up with new software. And underneath, the entrepreneurs, like those in our venture firm, are building for that new future, where the product had better be ten times better and more valuable, gosh darn it, or it's just not going to win. So when you add it up, each of those is like a five-year cycle, and there are like five of them. That's 25 years. There's some thinking behind it.

 

Alex 36:01

Well, it's interesting how greed gets you in and greed is going to get them out, right? Greed gets us all into problems. And you touched on it with the startups that you guys are working with, and we talked about this before, about making the internet good again, and how you have a different approach to what you're chasing, right? It's not just the dollars. The Facebooks and the Amazons are all just heavily invested in shareholder value, and they're going to do everything they can to squeeze as much as they can out of it. Talk a little bit about your approach, how you're looking at these startups that you guys are funding, and how they're going to make the internet better and shorten that 25-year cycle.

 

Tyson 36:43

Yeah, absolutely. Well, you know, the way we want to shorten the cycle is by showing that you can make money while doing good, and have objective financial results in both categories as soon as possible. So that's the first thing that we're doing. You said it, actually. My partner was at Microsoft for a lot of years. He was Bill Gates's technical advisor, that was part of his job during that time. He's a philanthropist, he's an engineer, he's amazing. Aaron Contorer, I love him, very lucky to have him with me. The two of us went and engineered an ethical AI design and ultimately concluded that yes, that's helpful, but you have to fix the shareholder incentive problem.

Tech grows from the United States primarily because of the free market, which is in a shareholder-primacy state, meaning it's all about the shareholders and the shareholders making money. Not about the end users also making money, and about the employees having a good middle-class experience: being able to afford where they live, having good work-life balance, and the long-term quality of life and durability that comes from a 35-year-old company trying to be a 100-year-old company. These are two-year-old companies trying to become seven-year-old IPOs, where user growth causes the valuation of the company to be representative of what the size of the company should be 10 years from now. So the shareholders are getting paid on selling that valuation to the public someday, and the public overpays for that valuation, because they can see the ridiculous growth rate, they can see the popularity, and they will pay for what that company will be worth some other day. Which incents everything toward rapid user growth. The easiest way to get users' attention is clickbait, superficial addictive behavior, attracting them to watch the live stream because we want their eyeballs all the time, that kind of stuff. That is the problem.
And now those don't work in an attention-saturated consumer environment like we just talked about. So it will correct itself. Great Scale just says we have to be investors that are happy with exits at different valuations than the super crazy ones. And we have to make sure that our companies use their capital effectively and efficiently, that we solve real-world problems, and that we're very happy with our smaller, more effective, and happier employee base and user base. Because the valuation is plenty for us investors, it's plenty for the founder, it's plenty for the employees' option pools, and the product is producing financial and wellbeing value for the users. And we can be proud and happy, and I can show that, dollar for dollar, we return just as much as Andreessen Horowitz. We didn't deploy as many dollars, it wasn't as flashy, but gosh darn it, we're proud and we made just as much, we just used way fewer dollars doing it. Because we bothered to build a real business of real value, as opposed to a hype business of potential value. That's the difference.

 

Aarde 40:15

I love that. You just touched on so many great points, I know we could talk for probably six more hours straight. Just something I wanted to add, because you use the term users a lot here. I just read a book called Hooked, and it talks about attention grabbing; it's actually helping people design products that can grab attention and grab users, and the negative effects that can come from that. And one of the things the author mentioned, and it was kind of a light bulb when I read the line, was that there are only a couple of industries out there that refer to their customers as users. There's the drug industry: if you do illegal drugs, you are a user. And there's social media: if you're using social media, you are a user, not a consumer or a customer. So it's great to hear that there are companies like yours, people like you, who are thinking about converting the business processes so that you're not treating them as users and taking their attention span or their time or whatever that is, and you're focusing on giving them something that they actually want. They can put down the tool or the system whenever they want, and they can go about their day-to-day lives with a healthy work-life balance. So you're doing an amazing job. We'd love to have you on the podcast another two, four or five times, I love this type of conversation. But Tyson, thank you so much for joining the podcast today. Alex, thank you so much for hosting and bringing us all together.

 

Alex 41:59

Yeah, absolutely. It's been a blast. And Tyson, maybe next time we'll do a four-way and we'll have Erica on as well, talking about Dialpad again. Oh, yeah. That'd be a lot of fun. We can ask some really awkward questions. So, perfect.

 

Tyson 42:17

Alright everyone come back. We're gonna have spousal combat. It'll be excellent.

 

Alex 42:24

That's right. All right. Thanks, Tyson.

 

Aarde 42:26

Thanks everyone.

 

Alex 42:28

Well, that wraps up the show for today. Thanks for joining, and don't forget to join us next week as we bring another guest in to talk about the trends around cloud contact center and customer experience. Also, you can find us at adleradvisors.com, on LinkedIn, or on your favorite podcast platform. We'll see you next week on Another Cloud Podcast.