Peter Kim studies the dynamics of social misperception and its implications for negotiations, work groups, and dispute resolution. His research has been published in numerous scholarly journals, received ten national/international awards, and been featured by the New York Times, Washington Post, and National Public Radio. He serves as a Senior Editor for Organization Science, as an Associate Editor for the Journal of Trust Research, and on the editorial boards of the Academy of Management Review and Negotiation and Conflict Management Research. He is also a past Associate Editor for the Academy of Management Review and past Chair of the Academy of Management’s Conflict Management Division.
INSIGHT + ANALYSIS
Quoted: Peter Kim in The Christian Science Monitor
Kim says Trump’s unpredictable tactics — meant to project strength — are backfiring, as delays and reversals weaken his credibility and bargaining power.
Podcast: Peter Kim on The One You Feed
Drawing on his extensive research, KIM, professor of management and organization, talks with The One You Feed podcast about how trust works and why it matters.
Quoted: Peter Kim in The Christian Science Monitor
KIM, professor of management and organization, tells The Monitor that people forgive honest mistakes, but a moral failure is harder to overcome if the company's culture is perceived as not addressing the public's concerns.
NEWS + EVENTS
Marshall Professor’s Book on Trust Earns Widespread Acclaim
Peter Kim’s book has received rave reviews in multiple media outlets.
Marshall Faculty Publications, Awards, and Honors: July 2023
We are proud to highlight the amazing Marshall faculty who have received awards this month for their groundbreaking work.
RESEARCH + PUBLICATIONS
When our trust is broken, and when our own trustworthiness is called into question, many of us are left wondering what to do. We barely know how trust works. How could we possibly repair it?
Although past research has offered important insights into how people seek to maintain their moral standing, it has generally portrayed this process as a matter of aggregating essentially static interpretations of a target's discrete acts. The present research reveals, however, that such interpretations are often far from static, and that they can change more than targets realize as new events unfold. More specifically, we find that: a) people can discount the diagnostic value of a target's initial deed if that party commits a subsequent act of the opposite valence, b) this occurs when an initial good deed is followed by a bad deed but not when the order is reversed, c) this occurs when evaluating the actions of others but not when evaluating the self, and d) this actor vs. observer difference can ultimately produce divergent beliefs about the target's overall morality, trustworthiness and subsequent trusting behaviors. We also identify a key mediating mechanism for these effects (i.e., the retrospective imputation of nefarious intent). Implications for reputation management, as well as the maintenance and repair of trust, are discussed.
Negotiation is an important potential application domain for intelligent virtual agents but, unlike research on agent-agent negotiations, agents that negotiate with people often adopt unrealistic simplifying assumptions. These assumptions not only limit the generality of these agents, but call into question scientific findings about how people negotiate with agents. Here we relax two common assumptions: the use of assigned rather than elicited user preferences, and the use of linear utility functions. Using a simulated salary negotiation, we find that relaxing these assumptions helps reveal interesting individual differences in how people negotiate their salary and allows algorithms to find better win-win solutions.
Women earn less than men in technical fields. Competing theories have been offered to explain this disparity. Some argue that women underperform in negotiating their salary, in part due to language in job descriptions, called gender triggers, which leaves women feeling disadvantaged in salary negotiations. Others point to structural and institutional bias: i.e., recruiters make better offers to men even when women exhibit equal negotiation skills. As a final salary is co-constructed through an interaction between employees and recruiters, it is difficult to disentangle these views. Here, we discuss how intelligent virtual agents serve as powerful methodological tools that lend new insight into this psychological debate. We use virtual negotiators to examine the impact of gender triggers on computer science (CS) undergraduates who engaged in a simulated salary negotiation with an automated recruiter. We find that, regardless of gender, CS students are reluctant to negotiate, and this hesitancy likely lowers their starting salary. Even when they negotiate, students show little skill in discovering tradeoffs that could enhance their salary, highlighting the need for negotiation training in technical fields. Most importantly, we find little evidence that gender triggers impact women's negotiated outcomes, at least within the field of CS. We argue that findings that emphasize women's individual deficits may reflect a lack of experimental control, which intelligent agents can help correct, and that structural and institutional explanations of inequity deserve greater attention.
Management scholars have typically regarded the widespread instances of hypocrisy across business, religious, and political institutions to be motivated and strategic. We suggest, however, that hypocrisy may stem not only from people’s motivation to interpret and utilize information in a self-serving manner, but also from fundamental differences in people’s access to that information itself. More specifically, we present a multi-stage Theory of Ethical Accounting (TEA) that describes how this differential access to information, specifically about the self vs. others, can create an interrelated series of cognitive distortions in how people account for the same unethical behavior. TEA posits that such distortions can allow people to believe they are being fair and consistent when appraising the morality of the self and others, while actually being inconsistent in how they do so, and describes how this can ultimately make it harder to address not only hypocrisy but unethical behavior more broadly in organizations.
COURSES