The power of the generalist  

March 13, 2024 | By Dean Sevin Yeltekin

In this Q&A, technology strategist Aditya Singh ’08S (MBA) reflects on how generative AI is changing the world of work and predicts who will succeed in the new normal.

Sevin Yeltekin: What brought you to Simon? 

Aditya Singh: I moved to the U.S. to complete a master’s degree in computer science and found myself at a crossroads. I could become a subject matter expert and lean into more technical work, or I could move into a business role and help clients leverage technology to achieve their goals. I found the business path more appealing, which is how I ended up at Simon to pursue an MBA. I naturally gravitated toward Simon’s data-driven, analytical approach because of my background. What I found was that Simon doesn’t just give you a skillset—it teaches you to think in a certain way. Being called upon to solve problems you know nothing about creates discomfort that leads to growth. That is how you thrive in future positions. 

SY: Now you’re a leader in technology strategy at Microsoft. What does your role entail?

AS: Within Microsoft’s Financial Services Industry sales team, my role is to help clients leverage our full technology suite to achieve their goals. I am a generalist, not a deep subject matter expert. My job is to put all the technology in front of the client like pieces of a Lego set and help them build something meaningful. 

SY: When did generative AI first come onto your radar, and how is it changing the nature of your work?

AS: I was introduced to the concept of AI in the early 2000s when I was doing my master’s degree. I was familiar with models that could be trained to make inferences, but everything changed when companies like OpenAI began training large language models (LLMs) with sweeping applications. Now we have a tool that doesn’t just scour transport data or climate data; it is trained on the entire universe of written language. At Microsoft, I have the privilege of trying new technology before it is commercialized, so there are plenty of generative AI tools that are still in development. But I can say that I currently use M365 Copilot, our AI assistant, to do things like produce meeting transcripts and set reminders. Generative AI is particularly useful for noticing things that happen on a schedule, like when someone sends an expense email every month around the same time. The AI tool might take notice and ask to automate it. If you say yes, that’s one more task off your plate for the day. This is just one example of the many repetitive tasks that AI will soon take over.

SY: How should we fill the time saved by using generative AI to do more routine tasks? 

AS: As time goes by, generative AI will free humans to be more productive at the things that matter. There will certainly be more opportunities for deep, strategic thinking. But that doesn’t necessarily have to take place in an office. We are all better when we have space to let our minds roam—whether that looks like thinking through a client’s problem on a long walk or spending more time with family. AI creates less pressure to do mundane tasks, but the answer is not to replace every mundane task with another task. That’s not good for people and their organizations in the long run.

SY: How are your clients using generative AI to enhance their work?

AS: My clients in the financial services industry are using generative AI to tackle a broad range of problems. I often help them leverage our tools to improve employee productivity and reduce communication barriers on platforms like Outlook, Teams, and PowerPoint, where people spend most of their workday. My clients are also looking to embed the power of LLMs into very complex systems so they can in turn serve their own clients more effectively. A lot of this is not visible to the public—it is part of their secret sauce. Overall, I have noted an eagerness in my clients to understand how competitors are using generative AI. This desire to maintain a competitive edge is driving adoption and innovation in an industry that can be slow to evolve. Nothing happens with the snap of a finger, but as AI tools are gradually baked into day-to-day work, people notice colleagues trying them and decide to take the plunge themselves. Then, over time, the entire culture changes. 

SY: When it comes to generative AI, what kind of skills would you want a new hire to bring to the table? 

AS: Generative AI is changing the entire fabric of the global workforce. It will create new industries and require new skills in ways we can’t even predict. But what will stay the same is the kind of employee that companies like Microsoft want to hire. It is the generalist, the person who is trained in multiple areas and has multiple skillsets, who brings the most value to the table. Generalists will combine the power of AI with the weight of their varied, rich experiences to solve problems in a way that a machine never could. When it comes to being nimble and adaptive in solving out-of-the-box challenges, there is still no match for the human mind.

Sevin Yeltekin is the Dean of Simon Business School. 


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.

Generative AI as a Data Analyst 

February 15, 2024 | By Dean Sevin Yeltekin

 

In this Q&A, business analyst Benedikt Statt ’16S (MS) reflects on the benefits and the limitations of using generative AI in a data analytics role. 

Sevin Yeltekin: What motivated you to pursue a Master’s in Marketing Analytics degree from Simon? What were some of the highlights of your experience?

Benedikt Statt: As an undergraduate student in Germany, I developed an interest in the people side of business. The desire to uncover what drives consumer behavior ultimately led me to marketing. I was working in a general marketing role at a small HR consulting firm in Mexico when I made the decision to pursue a one-year program to strengthen my skills in data analytics. At that time, I planned to continue building my career in consulting, so I got involved with Simon Vision. I remember learning early on about how to conduct and analyze surveys in market research class and then immediately putting that skill to use on a consulting project for a local company. Having the opportunity to apply theory and see very tangible results was impactful for me. I didn’t end up in consulting, but I owe my current job at Groupon to the experiences acquired through Simon Vision.

SY: Now you manage pricing and promotions for Groupon’s North American market. What does your role entail, and how do you use generative AI in your daily work?

BS: I work with engineering, design, and optimization teams around the globe to accomplish everything from monitoring daily promotion performance and testing website features to developing strategies to target customers with personalized messaging. One of the things I appreciate most about Groupon is that it’s the kind of place where we must prove every hypothesis we come up with, even when there is a consensus. At every step of the way, we use data analytics tools to drive and defend our decisions. A gut feeling isn’t enough. 

I primarily use Tableau, Google Sheets, and SQL in my daily work, but I do rely on generative AI to streamline things. I may use it to come up with ideas for improving SQL queries or copy and paste an error to see where I went wrong in my analysis. By using generative AI to reduce manual work, I have more time to dig deeper into data and spot trends, which is often like finding a needle in a haystack. The drawback is that there is more thinking involved in deep analysis, so I find myself more tired at the end of the day. Sometimes it's helpful to leave some more routine tasks on my plate to break up the more rigorous work.
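As a rough illustration of that loop (a hypothetical sketch, not a depiction of Groupon's actual tooling), the Python snippet below pastes a failing SQL query and its error message into a chat model and asks for a corrected version. The openai client usage is real, but the query, error text, and model name are invented for illustration.

```python
# Hypothetical sketch: ask an LLM to repair a failing SQL query.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY are available.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

failing_query = """
SELECT promo_id, SUM(redemptions)
FROM daily_promotions
WHERE run_date >= '2024-01-01'
GROUP BY run_date;  -- grouped by the wrong column
"""
error_text = "column 'promo_id' must appear in the GROUP BY clause"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a SQL reviewer. Return a corrected query and a "
                    "one-line explanation of the mistake."},
        {"role": "user",
         "content": f"Query:\n{failing_query}\nError:\n{error_text}"},
    ],
)
print(response.choices[0].message.content)
```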

SY: To what extent can generative AI take on the role of a data analyst? 

BS: I can imagine some data engineering and maintenance roles disappearing or shifting in the future, but there are limitations to what AI can accomplish. It would certainly make my life easier to feed in raw data and have generative AI tools complete all my analysis for me, but AI can't do everything I do, even if it looks like it can in a technical sense. There is a human touch that will always be missing. At Groupon, for example, someone might look at a user interface and come up with an idea to improve the customer experience. We can ask AI to implement the idea, but it takes a human to have the free thinking and creativity to come up with it in the first place. So much of what made my Simon experience meaningful was the gathering of creative minds. Intelligent people from diverse backgrounds combine their ideas to create something new. That is how you end up with places like Silicon Valley, where great ideas come from people, not robots.

SY: How should Simon integrate generative AI into the classroom experience today?

BS: Simon students graduate with a tremendous skillset, and the ability to leverage generative AI tools will be a welcome addition to the extent that they can apply them in real-world settings. When I interview a job candidate, I look for the ability to solve problems. If they use generative AI in that process, great. Those skills will certainly help push the company further. But the most important thing is that core ability to connect the dots between theory and practice. In addition to workshops and class projects, I hope that Simon can steer students toward consulting projects, competitions, and collaborations with local companies to help them connect the dots.

Sevin Yeltekin is the Dean of Simon Business School. 


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.

4 Pillars of Generative AI 

January 10, 2024 | By Dean Sevin Yeltekin

In the first installment of this Q&A series, strategy consultant Jeff Sigel ’01S (MBA) describes four pillars of generative AI usage and warns against common pitfalls. 

Sevin Yeltekin: You started your own consulting firm, Proprioceptive, several months ago. Can you walk us through your career journey to this point?

Jeff Sigel: I was originally a math and physics teacher, which I like to describe as the hardest marketing job I’ve ever had. After several years of teaching, I joined a consulting firm that introduced me to the world of marketing and inspired me to pursue a business education. Simon gave me the training and credibility I needed to grow in a new direction. I discovered brand management in the first year of my MBA program, landed an internship at Kraft Nabisco, and started a journey through food marketing and innovation that included stops at The Hershey Company and Cracker Barrel. 

At Cracker Barrel, I noticed a lack of communication between the finance and data engineering teams. No one was translating consumer data into actionable insights. I raised my hand and volunteered to build an analytics function to bridge the gap. Our CFO would often bring up the topic of integrating generative AI into our operations, in addition to the machine learning (ML) tools we were already using for forecasting purposes, but like many organizations, we were not prepared to use those tools in a strategic way. At the end of 2023, I left Cracker Barrel to found a consulting firm called Proprioceptive with a vision for helping companies activate strategy, not just put it on paper. Generative AI is increasingly part of this work.

SY: How have you built expertise in generative AI?

JS: I am an avid listener of audiobooks. In 2023, I spent much of the year listening to books on topics related to AI and machine learning. My approach is to listen to everything at 2x speed and fly through without being too worried about picking everything up, because what I glean from one book will help me understand the next one better. Using an application called DataCamp, I have also taken weekly classes to learn to code with Python. To supplement this independent learning, I took a class offered by Dan Keating at Simon that brought a fascinating perspective to the table. I learned more about what ChatGPT can do in terms of running Python code and was inspired to explore multimodal generative AI in greater depth.

SY: How can generative AI enhance human intellect rather than replace it? 

JS: When developing materials on generative AI for new clients, I walk them through four pillars of application:

  1. Knowledge task assistance—Tools like ChatGPT add incredible value when it comes to tasks like coding and report writing. I attended a recent conference looking at generative AI in the pharmaceutical industry, and most presentations touched on report writing. These reports are onerous and cost a fortune, but they are an integral part of every drug trial. Generative AI has the potential to streamline this process. 
  2. Enhanced idea generation—Generative AI can help my clients create in different modes: images, words, numbers, and sounds. Coming up with an image that fits a word is much simpler than taking a piece of music and generating an image that fits, but both are now possible. I also use AI tools for simple brainstorming. Today, I use ChatGPT to create a list of questions I should ask a prospective client or generate images for marketing materials. Back in my food innovation days, I might have asked ChatGPT to come up with a way of packaging chocolate that allows someone to carry it around without it melting.
  3. Accelerated personalized education—Generative AI can help my clients bridge the gap between functions. Imagine a salesperson who doesn’t understand what a data analyst does. They could ask an AI assistant to listen in on a conversation and explain what that analyst is saying, or suggest better questions to ask. There are endless examples of ways that generative AI can become a tour guide in unfamiliar territory and help us work more effectively across disciplines and functions.
  4. Automated human-like interaction—With the help of video generation platforms like Synthesia, it is now possible to type in a script and watch a video of a computer-generated person reading it in several languages. If a client is creating a series of training videos, this tool could dramatically improve accuracy and reduce cost.

SY: What are some common pitfalls you help your clients avoid? 

JS: You would never want to start with a hammer and look for ways to use it. In the same way, it is a mistake to take a generative AI tool that seems interesting and look for ways to apply it to business operations. As a marketer, I believe that you always start with the problem. Define the problem and search for a tool that can address it, whether or not it is related to AI. 

Just as clients can become too enamored of generative AI, they can also become too cynical. Maybe I’m too much of an optimist, but I view generative AI through a positive lens. Even with the tremendous social disparities in place today, think about the ways that innovations in fields like medicine have vastly improved living conditions. I’m currently consulting for a data analytics company that is building a new AI-powered app to help doctors create better patient care reports. Another client is working with machine learning models in the drug discovery space to predict new drug candidates. Humans will find a way to abuse every technological advance, but the general arc of history bends toward progress. The good will outweigh the bad.

Sevin Yeltekin is the Dean of Simon Business School. 


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.

What ChatGPT means for higher education

April 26, 2023 | By Dean Sevin Yeltekin

 

ChatGPT, a chatbot released by OpenAI in November 2022, has captured the world’s attention with its uncanny ability to compose essays, write song lyrics, play games, answer test questions, and take on other tasks traditionally associated with human intelligence.

The academic community has responded to the advent of this new technology with a mixture of fascination and concern, keenly aware of its potential to transform the way knowledge is produced, conveyed, and retained.

In the Q&A below, Mitch Lovett, senior associate dean of education and innovation at Simon Business School, weighs in on the potential impact of ChatGPT on higher education.

What do you view as ChatGPT’s primary strengths and weaknesses?

Where ChatGPT seems to excel the most is predicting text that might be useful to the user based on information that is available. A student taking a finance class, for example, might ask it to explain and contrast specific kinds of financial tools. ChatGPT might not produce the most elegant answer, but it will produce a B+ answer in a short amount of time. On the flip side, ChatGPT will often make up sources or information while trying to predict something—and, so far, it’s not particularly good at math. It is also not as useful when information doesn’t exist, like in a situation in which someone needs to write the website copy for a brand-new product. When ChatGPT tries to venture outside the space of prediction, it becomes less effective.

Do you think it can ever fully replace human intelligence?

No, but I would be wary of underestimating technology like ChatGPT. A few years ago, we never would have expected to encounter an AI program that can create something people struggle to discern from human art. Over time, it will become capable of more and more. Right now, though, it is just making predictions. It knows that certain things go with other things, but it does not know truth from lies. It doesn’t have a sense of ethics. That’s an area where human judgment is indispensable.

What I think is most profound is how it is going to redefine expertise in some fields. We are going to use ChatGPT to write things for us—text, code, outlines—so that we can complete our work faster. That means some skills will become less important and others more important.

What are some ways ChatGPT can improve the quality of education at Simon?

For both students and professors, one of its primary functions will be to assist in search. Constructing literature reviews will become significantly less time-intensive when using ChatGPT to aggregate sources and summarize information on a given topic. And when we think about ways to integrate it into classroom assignments, like using it to write code or organize information to help solve a case study, it is clear that students will be able to go further in the topics they are learning about. They will be able to do more in an assignment and do it more efficiently. They may not be able to learn more information overall, because the human brain can only absorb so much in a short time, but they will learn different information and get more practice designing solutions rather than implementing them.

I can also imagine ChatGPT serving as a first-line teaching assistant (TA) or discussion board assistant, fielding questions from students in asynchronous classes more quickly and efficiently than their professors and human TAs can. Having an outside curator of information to post questions, responses, and comments will enrich the asynchronous interactions that take place.
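To make the idea concrete, here is a minimal sketch in Python of a first-line discussion board assistant: an LLM drafts a reply from a small piece of course context, and the draft goes into a review queue for a human TA instead of being posted directly. Everything here (the openai client usage, course snippet, model name, and queue) is an illustrative assumption, not a description of any system in use at Simon.

```python
# Hypothetical sketch: an LLM drafts a discussion-board reply for human review.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY are available.
from openai import OpenAI

client = OpenAI()

course_context = (
    "Week 3 covered net present value: discount each cash flow by "
    "(1 + r) ** -t and sum the results."
)
student_post = "Why does a higher discount rate make the NPV smaller?"

draft = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a course teaching assistant. Answer using only "
                    "the provided course context; if it is insufficient, say "
                    "the instructor will follow up."},
        {"role": "user",
         "content": f"Context: {course_context}\n\nStudent post: {student_post}"},
    ],
).choices[0].message.content

# A human TA or instructor approves each draft before it is posted.
review_queue = [{"post": student_post, "draft_reply": draft}]
print(review_queue[0]["draft_reply"])
```

The human-in-the-loop queue is the important design choice here: the model handles the first pass, while accountability for what students actually see stays with the teaching staff.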

What are some of the downsides or pitfalls of using ChatGPT in an academic setting?

There are significant implications when it comes to academic honesty. Theoretically, a student taking an online asynchronous course could use ChatGPT to complete all their assignments and take their exams. Many faculty members at Simon already use various analytics tools to detect plagiarism—for example, by comparing student answers to see if any are unreasonably similar. But as ChatGPT produces writing that is increasingly indistinguishable from something a student would produce, it will become an arms race to detect the use of AI. Some professors will address this by switching to in-person, handwritten exams whenever possible, while also ensuring that the course content is so specific that ChatGPT becomes ineffective. These are strategies that involve trying to block ChatGPT. Others will embrace the use of ChatGPT, but will do so by adjusting assignments and exams to compensate, similar to the concept of allowing a one-page cheat sheet on an exam or holding an open book exam.

Of course, there is also the danger of placing too much trust in ChatGPT. A student taking an introductory course in accounting, for example, might ask it to answer an accounting-related question but lack the base knowledge to discern the accuracy of its answers. Without any background in the subject matter, it can be difficult to know if ChatGPT is producing an A- answer or a complete fabrication. This is where our definition of expertise becomes important. Part of educating students is helping them understand how best to use AI and evaluate when the AI might be producing meaningless or less valuable responses.

How do you expect ChatGPT to change the way students learn?

I find it helpful to think about the analogy of using calculators in math. My children’s elementary school found that allowing students to use calculators from an early age weakened their ability to do simple computations in their head, a skill that is helpful in doing more advanced math. In the same way, the introduction of ChatGPT might weaken students’ ability to write. Writing is certainly something we have traditionally viewed as an important skill. But how important is it in relation to the tasks we need people to be able to do in a world with AI?

If ChatGPT is always creating a generic setup to tailor, it allows students to avoid the mundane and repetitive—but doing mundane, repetitive tasks over and over might be helping them develop intuition and judgment. One of the most important tasks that confronts us as educators is figuring out how much of this mundane work is needed to become an expert when AI is available.

On the other hand, we may be overreacting to the fact that future students will learn differently from their predecessors. To be considered an expert on something, it will be less essential to recall facts that will always be at their fingertips and more important to apply judgment and critical thinking. There is little chance of stopping the integration of AI tools like ChatGPT into education, so our job is to decide what fundamental knowledge is necessary for mastering a subject and learn to train students accordingly.

What questions are we not asking about ChatGPT that we should be asking?

ChatGPT raises fundamental questions about truth that we must grapple with. This technology produces text that may or may not be accurate and presents it as fact. When asked the same question twice, it may produce contradictory answers if it incorporates new information in between searches. Will truth start becoming blurrier for the people who use it?

I also think about how search engines like Google form a layer between the user and a website. ChatGPT, on the other hand, will put a filtered version of a website in front of you. You won’t even need to visit a website directly. What are the implications when it gets things wrong? Where do we draw a boundary line between what is a search and what is not? There are plenty of legal questions to consider surrounding ownership and responsibility for the information that is presented.

Mitch Lovett is the senior associate dean of education and innovation and a professor of marketing at Simon Business School.


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.

Helping Health Companies Leverage AI

May 8, 2024 | By Dean Sevin Yeltekin

 

In this Q&A, analytics and AI consultant Puneet Kaur (MSF’16) reflects on her journey to making an impact at the intersection of healthcare and AI.

Note: This is Part 4 of 4 in our Alumni AI Q&A series.

Sevin Yeltekin: Can you walk us through your career journey in the data analytics and AI space?

Puneet Kaur: By a stroke of luck, my first-ever gig was with EXL, a consulting firm focused on data science and analytics. I was drawn to the opportunity to be part of a functional business unit that leverages technology to drive change for a diverse range of clients. The longer I was in the consulting world, the more inspired I was to obtain more business knowledge, particularly in finance, to dig even deeper for my clients and move up the ladder. I relocated to the U.S. to pursue a Master’s in Finance at Simon and returned to EXL after graduating. I primarily worked with large retail and banking clients as a hands-on data scientist in an environment where we were in the early stages of exploring AI applications. After I had worked my way up to the position of engagement manager, the pandemic hit and I began thinking about ways to make a broader societal impact. It was too late to become a clinician, but I realized I could support the healthcare industry with the skills I had already built. In 2021, I joined Tiger Analytics, a global leader in AI and analytics. Now, I am a client partner in the life sciences and pharmaceutical space.

SY: What are the types of problems you solve for your clients?

PK: My favorite thing about consulting is that I never solve the same problem twice. At EXL, I frequently worked on use cases related to analyzing customer spend. For a company to send one targeted email, for example, it must have confidence from data that the decision makes sense in light of the customer profile and the competitive landscape. Now, I am still focused on patient segmentation but in a different context. For example, some of my pharmaceutical clients come to me with questions about the vaccines they have developed. They are looking for data-driven insights on the demographics of the population opting for or against vaccination, on the roadblocks to vaccine access, and on the touch points that have shaped a patient’s perspective on the vaccine. AI tools help us sift through data with the aim of developing a strategy that drives higher rates of vaccination. And there are plenty of similar AI applications in the industry. Many people hold off going to a doctor until they are unwell, but they walk around wearing smart watches that capture their healthcare data. Health companies can use AI to analyze this data in a way that improves preventative medicine and saves lives.
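As a rough sketch of the segmentation workflow described above (synthetic data and invented features, not a client pipeline or Tiger Analytics' method), the Python snippet below clusters patients on a few engagement attributes with scikit-learn and then compares vaccination rates across the resulting segments, which is the kind of output that can guide targeted outreach.

```python
# Hypothetical sketch: cluster patients, then compare vaccination rates by segment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic features: age, miles to nearest clinic, provider visits per year.
patients = rng.normal(loc=[45.0, 8.0, 2.0], scale=[15.0, 6.0, 1.5], size=(500, 3))
vaccinated = rng.random(500) < 0.6  # synthetic vaccination flag

# Standardize the features so no single scale dominates the clustering.
X = StandardScaler().fit_transform(patients)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Segments with low vaccination rates become candidates for targeted outreach.
for seg in range(4):
    mask = segments == seg
    print(f"Segment {seg}: {mask.sum()} patients, "
          f"vaccination rate {vaccinated[mask].mean():.0%}")
```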

SY: What is the greatest day-to-day challenge you encounter? 

PK: The world of AI is a madhouse. I get up in the morning and there is always another change, another technology. There is so much happening in this space that you are never caught up, even if you are at the top of your game and reading constantly. It’s a challenge to stay informed without becoming overloaded. 

SY: If you are hiring a student out of Simon, what specific skills are you looking for in relation to AI? 

PK: My expectations will depend on their level of experience. If I am considering a candidate with a few years of work experience, I would lean on them more heavily for technological ability, which they learn in the classroom, rather than industry expertise. AI is not a magic wand that we can wave to get results—there needs to be an input of solid, refined data before it’s useful. New hires need to get their hands dirty with data, ensuring that it is in good form for AI to function correctly. For someone more advanced in their career, I would be more focused on their ability to use AI tools to stitch a story together for the client.  

SY: How has your Simon education made you more effective at what you do?

PK: I moved from a small town in India to the U.S. without any family or friends. I graduated from Simon a more confident person, someone who could offer tangible skills in a dynamic corporate environment. I’m grateful to Simon for that experience.

Puneet Kaur is a Client Partner (AI: Data Strategy, Engineering & Data Science Consulting) at Tiger Analytics. 


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.

Research paper explores impact of AI on pricing decisions 

March 14, 2024 | By Bret Ellington

Simon Business School is proud to announce the publication of a new research paper co-authored by Professor Jeanine Miklos-Thal, the Fred H. Gowen Professor of Economics and Management and a CEPR research fellow. The paper, titled “AI, Algorithmic Pricing, and Collusion,” appeared in Competition Policy International (CPI). According to its website, CPI is “an independent knowledge-sharing organization focused on the diffusion of the most relevant information and content on the subjects of antitrust, competition law, and technological regulation.”

The paper delves into the complex intersection of artificial intelligence (AI), machine learning, and pricing decisions, addressing concerns about their potential impact on consumer prices.

In recent years, advancements in AI and machine learning have raised questions about whether such innovations could facilitate collusive pricing practices among firms, ultimately leading to higher prices for consumers. However, the paper challenges these fears by providing a detailed examination of the actual pricing algorithms commonly used by firms.

Through extensive research, Professor Miklos-Thal and her co-author, Catherine Tucker, the Sloan Distinguished Professor of Management at MIT Sloan and an NBER research associate, argue that while certain pricing algorithms may raise competition concerns, others may actually undermine firms' ability to sustain collusive prices. The paper emphasizes the importance of understanding the nuances of different pricing algorithms and their implications for competition in the marketplace.
 

This groundbreaking paper sheds light on the complexities of pricing decisions in the age of AI and machine learning, underscoring the need for a nuanced understanding of pricing algorithms to ensure fair competition and consumer welfare.


Read the full paper here:

AI, Algorithmic Pricing, and Collusion

Bret Ellington is a senior copywriter and content creator for the Simon Business School Marketing Department.


 

Follow Simon Business School News for the latest articles in this series at Simon News & Highlights.