Outlier Research Funding

iORB’s core mission is to nurture and grow outlier research. Consistent with this mission, iORB provides funding to support ambitious research projects that require additional resources but have significant potential impact. iORB aims to annually fund several outstanding proposals that will positively impact business and society.

A call for proposals is sent out each Fall as part of a competitive submission process, and respected business scholars review the proposals. Based on the reviews, the iORB executive board makes final funding decisions.

The proposals for funding are evaluated according to the following main criteria:

  1. Is there potential for creative and rigorous research that has a likelihood of significant impact on business and society, and will result in publications in premier academic journals?
  2. Is the work of broad impact; i.e., is there a path forward for disseminating the results of the work beyond the proposer’s own academic community? 


Funded Outlier Research Proposals

Fall 2018

The Evolution of Gender (in)Equality in the Workforce: Evidence from the American Legal Sector, 1870-1962

Joe Raffie, Department of Management and Organization

Nan Jia, Department of Management and Organization

This project will study the evolution and emergence of gender equality (and inequality) in the American legal services industry. To do so, we will build and examine a novel database that we are constructing from historical law firm directories. This database spans nearly a century, starting in 1870, when women in the USA did not have the right to vote and virtually no women were licensed attorneys, and ending in 1962, by which time significant changes and advancements had been made (albeit not enough). The historical nature of these data and the time span they cover (92 years) provide an unusual and unique opportunity to examine the factors that contributed to (or detracted from) changes in gender equality, starting at a point in time when female labor force participation in this industry was effectively zero. Our goal is to learn from history in a way that can be applied to the plethora of equality issues we currently face and to generate knowledge that can be used to promote and enhance gender equality, diversity, and inclusion within modern firms.



The Long Run Effects of Financial Sector Competition

Rodney Ramcharan, Department of Finance and Business Economics

The motivation for the data collection begins with the fact that influential theories in macroeconomics and finance predict that the structure of financial intermediation and the balance sheets of intermediaries can have a powerful effect on economic fluctuations and asset prices (Brunnermeier and Sannikov, 2014; He and Krishnamurthy, 2013). These effects can, however, be ambiguous. Greater competition in credit markets can generate more efficient intermediation, reduce borrowing costs, and relax credit constraints for marginalized borrowers. This can lead to faster economic growth in the long run—the “credit is good for growth” hypothesis. But more competition in the financial system can also erode the profitability of incumbent financial institutions, leading to riskier lending and a more unstable banking system. In the most extreme case, increased competition in credit markets can foster an ex-post misallocation of credit to riskier borrowers, producing asset price booms and busts as well as profound and long-lasting shifts in financial regulation (Mian and Sufi, 2009; Favara and Imbs, 2015; Rajan and Ramcharan, 2015).



Foundations of Neural Networks

Jason Lee, Department of Data Sciences and Operations

In the past decade, neural networks have made a remarkable impact on application domains including computer vision, robotics, and natural language understanding. Specific advances from Deep Learning include AlphaGo, a superhuman Go-playing AI, strategy game playing, and realistic chat-bots. At their core, these applications all involve learning the parameters of a neural network via the stochastic gradient descent algorithm. This proposal lays out a program to study provable learning methods in neural networks by separating the study into two interrelated problems: 

(i) Optimization of Neural Networks. The loss function of neural networks is highly non-convex, yet standard SGD and its variants attain near global minima, as evidenced by zero training error. What properties (e.g. overparametrization) allow for gradient methods to converge to local or even global minima? Can we design the landscape by modifying the architecture and loss to allow for efficient training?

(ii) Generalization and Regularization in Neural Networks. Since commonly used neural networks are overparametrized, the model is able to perfectly interpolate the training data. In fact, there are infinitely many global minima that perfectly interpolate the training data. Common statistical wisdom suggests that most of these models will incur high generalization error; however, empirically SGD finds (near) global minima that generalize. Why does the stochastic gradient algorithm find global minima that do not overfit? Does the stochastic gradient algorithm induce an implicit regularizer? Can we isolate the regularizer and use it as an explicit regularizer to further improve generalization?
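The implicit-regularization question in (ii) can be illustrated in a heavily simplified setting that is not part of the proposal: in overparametrized linear regression, plain gradient descent started from zero not only interpolates the training data but converges to the minimum-norm interpolant, i.e., the algorithm itself induces an implicit L2 regularizer. A minimal numpy sketch (all dimensions and constants are illustrative assumptions):

```python
import numpy as np

# Overparametrized linear regression: 5 samples, 20 parameters,
# so infinitely many weight vectors interpolate the data exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))
y = rng.standard_normal(5)

# Plain gradient descent on squared loss, initialized at zero.
w = np.zeros(20)
for _ in range(50_000):
    w -= 0.01 * X.T @ (X @ w - y)

# The iterate interpolates the training data (zero training error) ...
assert np.allclose(X @ w, y, atol=1e-6)

# ... and coincides with the minimum-norm interpolant, an implicit
# regularizer induced by the optimization algorithm, not the loss.
w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)
assert np.allclose(w, w_min_norm, atol=1e-6)
```

Characterizing the analogous implicit bias for genuinely nonlinear networks is precisely the kind of open question this proposal targets.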



A Road to Efficiency Through Communication and Commitment

Joao Ramos, Department of Finance and Business Economics

Ala Avoyan, Indiana University 

Economic situations often require agents to coordinate their actions, and coordination failures leading to underperformance are pervasive in society. For instance, firms have to make investment decisions with uncertain returns that depend on the amount invested by other firms; thus, coordination failure may limit economic activity, with firms choosing suboptimal investment levels (see, for instance, Rosenstein-Rodan (1943)). Focusing inside the firm, consider a team that has to hand in a joint report before a deadline. Each team member is responsible for a section, and the report is complete only when all sections are delivered. A team member would be willing to put in extra effort to hand in her part on time, but only if she were certain all others would do so as well.

Coordination failure may occur because, although group members have common preferences over outcomes, achieving the best outcome requires taking an action they are unwilling to take unless others do so as well. This makes strategic uncertainty a relentless feature: although superior outcomes (e.g., everyone putting in extra effort) can be reached, given the uncertainty about others' strategies, the risk of mis-coordination makes those outcomes unattainable.

Although the benefits of overcoming coordination failures are obvious, it is not clear how institutions can mitigate strategic uncertainty. Given that institutions are rarely random, this is a natural question to answer from an experimental perspective. Recent experimental research focuses on communication institutions—comparing public versus private messages sent before players choose actions, or a single round of messages versus many rounds—with mixed results for efficiency. Given the lack of theoretical implications for many of the institutions studied, it is unclear how a particular institution could help agents coordinate on better outcomes. Furthermore, even if an institution improves coordination in the lab, it is unclear which institutional feature is responsible for the success.

Working Paper:

A Road to Efficiency through Communication and Commitment



Opening up the Black Box of Auditing

Clive Lennox, Leventhal School of Accounting

Due to a lack of publicly available data, there is relatively little evidence on how partners are incentivized and what audit partners do during an audit. In this project we will use three unique and proprietary data sources to peer into this black box:

1) the equity ownership stakes of audit partners
2) the identities of review partners and engagement partners, and
3) the audit adjustments that are made to reported earnings during the audit.

We expect to generate two papers using these data. First, we will examine how the ownership stakes of audit partners affect audit quality. Second, we will examine why internal control audits can have adverse consequences for financial reporting quality. As explained below, these two research questions are very important for regulators and practitioners, as well as the academic community.



Valid Inference for High-dimensional Statistical Models

Adel Javanmard, Department of Data Sciences and Operations

As we often hear, we are living in the era of the data deluge. 'Big data' technologies have allowed the acquisition of vast amounts of fine-grained data and their accumulation into large-scale databases at an unprecedented speed. Powerful hardware and software systems have also been developed to crunch these data and extract information from them. Thanks to these developments, the data-driven approach has become de rigueur in almost every field. Given a dataset, an off-the-shelf software package is used to fit a statistical model, which is then used for prediction, discovering new associations between variables (e.g., a specific demographic variable with future income), clustering, policy design, decision making, and so on. However, this trend is a double-edged sword: the increasing complexity of these data and of the algorithms used has made statistical models significantly less interpretable. Employing the derived models without a proper understanding of their validity can lead to a large number of false discoveries, wrong predictions, and massive costs. Consider a concrete example in which the medical records of patients are used to develop a model that provides a personalized risk score for a chronic disease. A high risk score can trigger an intervention, such as incentives for healthy behavior, additional tests, and medical follow-ups, which are all costly. Now, the question is: how certain are we of the predictions made by this statistical model? What is their limit of validity? How biased is the resulting model? Closely related is the concern of reproducibility: researchers would like to know whether the findings of a study can be successfully replicated in another study under the same conditions, not exactly but up to statistical error. Answering these questions for modern high-dimensional data has created a need for novel foundational perspectives on inferential thinking.


Fall 2017

Refugees and Entrepreneurship: A Comparative Country Analysis

Shon R. Hiatt, Department of Management and Organization

The global refugee crisis has affected many developed and developing economies. How to deal with this crisis remains an open question. Entrepreneurship has been proposed as a potent tool that can lift refugees out of poverty, integrate them into new societies, and spur local economic growth. Yet research in the area of refugee entrepreneurship remains scant and is limited to a handful of qualitative case studies. As a result, theory explaining the founding, performance, and survival of businesses organized by refugees in their host countries, as well as the mechanisms by which refugee inflows affect native entrepreneurship, remains limited. Empirically, we propose to address this issue by examining both refugee entrepreneurship and the influence of refugees on native entrepreneurship in Mexico and Jordan. These two countries are the primary hosts of refugees from Central America (Mexico) and Arab countries (Jordan), and thus provide an opportunity to conduct comparative research on the topic. Project investigators will use a mixed-methods approach that includes panel data, social network analysis, event history analysis, regression analysis, and qualitative analysis.



Conspicuous Consumption and Status Signaling among Great Apes

Joseph Nunes, Department of Marketing

This research intends to test whether great apes engage in status signaling through conspicuous consumption, specifically by employing artifacts as positional goods. The project is interdisciplinary, involving a collaboration between researchers in marketing and comparative cognitive psychology. It seeks to access the mental processes that drive certain behaviors in both humans and our closest living relatives, great apes (bonobos, gorillas, and orangutans). The study of primates helps us learn about the origins of human behavior and trace evolutionary pathways in our thinking and behavior. By studying status signaling behavior in apes using artifacts and controllable behaviors, we hope to gain a better understanding of human behavior and the evolution of hierarchical social interactions.

What is the key research question?

Do great apes engage in status signaling through the acquisition and consumption (display) of positional goods?

Consumers often acquire higher-priced goods when lower-priced, functional substitutes are readily available. The most common explanation for this behavior has its roots in signaling theory, which examines communication between individuals. It is widely assumed that people who spend more than necessary for a distinctive product are engaging in conspicuous consumption intended to distinguish themselves from others whom they consider socially inferior (Veblen 1899). To the conspicuous consumer, such public displays of discretionary economic power are a means of signaling a given rank within the social hierarchy. It is critical to point out that conspicuous consumption is believed to be a form of signaling that has its roots in evolutionary biology.



Misinformation on Online Social Platforms: Informational Mitigations

Kimon Drakopoulos, Data Sciences and Operations

Misinformation on online media markets and social platforms is a phenomenon that drew public attention after the 2016 presidential election. Ever since, online platforms have been implementing various approaches to mitigate the effects of misinformation, but little is known, theoretically or empirically, about the effectiveness of these approaches.

In our latest work (Allon et al. (2017)), we develop a model for the consumption of content on online social platforms according to which agents, at equilibrium, engage with content that is closer to their beliefs. We show that this confirmation bias on an online platform leads to the following novel and counter-intuitive paradox: the more information the platform provides to users, the less they learn at equilibrium. Therefore, platforms should design the size of the News Feed in such a way that users' learning outcomes are not compromised. Based on the findings of our paper, in the proposed research we plan to design a behavioral experiment to assess whether the insight from our theoretical model is accurate, in which case the design implications for the platform would be significant.

Furthermore, in our prior work (Candogan and Drakopoulos (2017)), we theoretically study the optimal intervention by a platform that seeks to optimize the tradeoff between engagement and misinformation, and we establish that, without loss of optimality, platforms can resort to simple, straightforward mechanisms with binary labels ("True" or "Fake"). Moreover, optimal mechanisms may label content differently across users depending on their position (centrality) in the underlying social network. On the other hand, it is unclear whether our results extend to the more realistic setting where (i) users are polarized and (ii) network connections form among "similar" users (homophily). In the proposed research, we would like to understand the effect of polarization and homophily on the design of intervention mechanisms as well as on equilibrium behavior.

Working Paper:

Information Inundation in Platforms and Implications



Effects of Overconfidence on Decision Making: Technology Readiness Levels Assessments in NASA’s SBIR Program

Fernando Zapatero, Department of Finance and Business Economics

Andrea Belz, USC Viterbi School of Engineering 

The broad topic we are studying is the effect of overconfidence on economic decision making.

There is a long literature in finance on overconfidence and its impact on individuals' decisions (especially CEOs'), as well as its long-term effect on outcomes (for example, the profitability of companies with overconfident CEOs). Although the academic community has reached some conclusions, there are no direct measures of overconfidence, and the robustness of the conclusions in the literature is not obvious. The notion of overconfidence is undoubtedly elusive, but we plan to use our proprietary database, as it may provide a more direct assessment of overconfidence than the proxies used in current research.

Viterbi has access to a database from the National Aeronautics and Space Administration (NASA) Small Business Innovation Research (SBIR) program, a federal subsidy of major importance to the technology entrepreneurship ecosystem. The data set includes ex ante and ex post technology maturity assessments in the form of technology readiness levels (TRL), a standard metric of the aerospace industry, measured both by the entrepreneur and by a NASA program manager. We propose analyzing the difference between these two assessments (the entrepreneur's and the program manager's) as a measure of overconfidence, and studying its effect on both the selection and the success of the proposals. We also have a comprehensive database of patents, a standard measure of the success of research proposals.



Performing Large-scale Causal Inference in the Digital Age: Integrating Field Experiment and Big Data

Tianshu Sun, Department of Data Sciences and Operations

With the ongoing digitization process, firms are increasingly making data-driven decisions by leveraging randomized experiments and big data. The randomized experiment is the holy grail of causal inference but is often costly and limited in scale. Observational data, on the other hand, are often readily available and large-scale but may lead to biased estimates of causal effects. How can firms combine the advantages of both randomized experiments and observational data while avoiding the shortcomings of each? How can firms design adaptive causal inference procedures for any intervention in any context, given the available observational data and the cost of new randomized experiments? In this proposal, I propose a new framework drawing a close analogy with semi-supervised learning, and introduce an estimation method (IBASE) that Integrates Big (observational) Data and Small (randomized) Experiment for large-scale causal inference. The framework and method can potentially be applied to a wide range of applications, including but not limited to improving product feature design and pricing schemes, optimizing and personalizing marketing campaigns, and accelerating experimentation and sampling processes.
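To see why neither data source alone suffices, consider a toy simulation (this is not the proposed IBASE method; the variable names and numbers are purely illustrative): a large observational sample with a hidden confounder yields a precise but badly biased effect estimate, while a small randomized experiment is unbiased but noisier.

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 2.0

# Large observational sample: a hidden confounder u drives both
# treatment take-up and the outcome, biasing the naive comparison.
n_obs = 200_000
u = rng.standard_normal(n_obs)                      # unobserved confounder
t_obs = (u + rng.standard_normal(n_obs) > 0).astype(float)
y_obs = true_effect * t_obs + 3.0 * u + rng.standard_normal(n_obs)
naive = y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean()

# Small randomized experiment: treatment assigned independently of u,
# so the difference in means is unbiased, just noisier.
n_exp = 2_000
u2 = rng.standard_normal(n_exp)
t_exp = rng.integers(0, 2, n_exp).astype(float)
y_exp = true_effect * t_exp + 3.0 * u2 + rng.standard_normal(n_exp)
exp_est = y_exp[t_exp == 1].mean() - y_exp[t_exp == 0].mean()

# naive is far above true_effect (confounding bias), while exp_est is
# close to it. Combining the precision of the big observational sample
# with the validity of the small experiment is the problem the
# proposed framework addresses.
```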

Predicting Success in New Ventures: Connections, Experience, and Team Composition in Entrepreneurial and Established Firms

Milan Miric, Department of Data Sciences and Operations

Noam Wasserman, Lloyd Greif Center for Entrepreneurial Studies

Pai-Ling Yin, Lloyd Greif Center for Entrepreneurial Studies

Shelly Li, Leventhal School of Accounting

Frank Nagle, Department of Management and Organization

Firms create economic value by organizing people to translate an idea into a product or service. These organizations may be entrepreneurial startups or established firms engaging in entrepreneurial activities. However, nearly two-thirds of startup failures are due to "people problems": interpersonal conflict between co-founders or friction between founders and hires. Although this has been documented in earlier studies (Gorman & Sahlman, 1989; Kaplan & Stromberg, 2004), those studies have not fully explored what factors predict success in new ventures. How can we help founders and established firms better identify the right combinations of people for their organizations? What skills should founders and employees possess?


Fall 2016

Are More Diverse Firms More Successful?

Sandra Rozo, Department of Finance and Business Economics 

Using unique employee-employer micro data from the United States, this research project will study the causal effects of labor diversity (measured as ethnic or racial fractionalization) on firms' real outcomes, including employment, sales, firm exit, output per worker, capital-labor ratio, productivity (measured as total factor productivity), and innovation. Taking into account the rapid globalization process and the increasing number of immigrants in the United States, the results of this analysis are of great value for our business school in enhancing its stock of knowledge on the consequences of diversity for businesses, and could be placed under our strategic theme of Global Focus. Most importantly, this will be the first study to use employee-employer matched micro data to approach this question.
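Fractionalization here presumably refers to the standard Herfindahl-based index: the probability that two randomly drawn workers in a firm belong to different groups. A small illustrative sketch (the function name is ours, not the study's):

```python
from collections import Counter

def fractionalization(groups):
    """Fractionalization index 1 - sum(s_i^2), where s_i is the share
    of workers in group i: the chance that two randomly drawn workers
    belong to different groups."""
    n = len(groups)
    return 1.0 - sum((c / n) ** 2 for c in Counter(groups).values())

# A firm where everyone shares one group is minimally diverse ...
assert fractionalization(["a", "a", "a", "a"]) == 0.0
# ... while four workers from four distinct groups score 0.75.
assert fractionalization(["a", "b", "c", "d"]) == 0.75
```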



The Political Economy of Social Media in China

Yanhui Wu, Department of Finance and Business Economics 

In non-democratic countries where information channels are scarce, social media has produced an enormous information shock to society. It is widely debated whether social media can improve the accountability of governments and facilitate the democratic process in autocracies. This project addresses this debate by studying how social media affects political and economic outcomes in China. The primary goal of the research is to provide a rigorous assessment of the political role of social media in China, with a focus on three areas of outcomes: 1) collective action events such as strikes and protests; 2) corruption and the monitoring of officials; and 3) censorship and government information disclosure.

The explosion of the use of social media in China also produces a data shock to researchers. The basic data set I will use in this project includes 13 billion posts published on the most prominent Chinese blogging platform from 2009 to 2013. Methodologically, I will extensively use modern statistical learning tools such as text mining and machine learning in combination with traditional causal inference methods. One application is to construct a series of social indices to measure the institutional quality and business environments across regions in China.


A Paradigm of FDR Control in High-Dimensional Nonlinear Models

Jinchi Lv, Department of Data Sciences and Operations

Yingying Fan, Department of Data Sciences and Operations

The wide availability of massive data in areas as diverse as marketing, economics, finance, operations management, and genomics poses unprecedented challenges to statistical methods, theory, and algorithms. A common issue is a deluge of explanatory variables, often many more than the number of observations, even though the outcome actually depends on only a small fraction of them. An important question in any such study is how to select the variables or causal factors that are important in explaining outcomes. Traditionally, we determine which variables are statistically significant by considering the p-values output by "regression" software. However, these p-values do not make sense in high-dimensional settings and would lead to wrong conclusions. The authors propose to develop a novel method for controlling the False Discovery Rate (FDR) in high-dimensional nonlinear models. The proposed method will provide an important step in the pursuit of key causal factors in a wide range of important applications across disciplines, with scalability and statistical guarantees.
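For context, the classical FDR-controlling baseline that such a method would extend to high-dimensional nonlinear models is the Benjamini-Hochberg step-up procedure. A short sketch of that standard procedure (the p-values below are made up for illustration):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.1):
    """Benjamini-Hochberg step-up procedure: reject the hypotheses with
    the k smallest p-values, where k is the largest index such that the
    k-th smallest p-value satisfies p_(k) <= k * q / m."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# Ten illustrative p-values; at q = 0.05 the procedure rejects the
# two smallest (0.001 <= 0.005 and 0.008 <= 0.010, but 0.039 > 0.015).
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals, q=0.05).sum())  # 2
```

The p-values fed into this procedure are exactly what breaks down when the number of variables exceeds the number of observations, which is the gap the proposal targets.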



The Nomenklatura State Institutions in the Knowledge Economy

Nan Jia, Department of Management and Organization

A pivotal factor in the rapid surge of China's indigenous innovation in recent years is the direct and powerful role of the Chinese government, which creates a very different institutional environment for innovation compared to countries such as the U.S. This paper aims to understand how key features of political governance in China's political system shape the incentives for developing innovations. Contrary to the ideal type of Weberian bureaucracy, at the heart of China's state institutions is a nomenklatura model, which generates incentives for state officials to promote certain ideas, for example knowledge production in the 21st century. We plan to use longitudinal data from 1990 to 2015, covering all patents produced in each of the 333 Chinese municipal-level cities and 32 provinces (including province-level cities) every year, to study the relationship between a salient incentive feature of the nomenklatura governance system and the patenting landscape in China. We predict that the same incentive structure that resulted in excessive grain extraction and famine in the 1950s also produced greater activism in patenting following the national campaign promoting indigenous innovation—but with certain distortions: a larger number of patents at the expense of their quality or novelty. This project has the potential to bridge the gap between theories in political science and the economics of innovation literature on how state institutions influence economic outcomes (innovation outcomes in particular).



Optimization in the Small-Data Regime

Vishal Gupta, Department of Data Sciences and Operations

Paat Rusmevichientong, Department of Data Sciences and Operations

Modern decision making under uncertainty often requires making thousands of decisions simultaneously, at a highly granular level, in a time-varying environment. Because of these three features - large scale, high granularity, changing environments - the relative amount of relevant data per decision is often quite small. We term this emerging application setting the small-data optimization regime. This proposal aims to: (1) formulate customized methods for decision making in the small-data regime that exploit large-scale optimization structure, (2) promote the adoption of these methods by creating open-source software, (3) partner with the Operations Innovation team in the City of LA's Mayor's office to implement these methods on real-world problems, and (4) host a hackathon/conference showcasing this software and the above real-world case studies for academics and practitioners.

Working Papers: 

Small-Data, Large-Scale Linear Optimization with Uncertain Objectives

Data-Pooling for Stochastic Optimization



Institutional Knowledge and Local Information Advantage of US Versus Chinese Information Intermediaries 

T.J. Wong, Leventhal School of Accounting 

Using textual analysis of a comprehensive set of analysts’ reports and corporate news articles of Chinese firms, we propose to address three research questions in two related projects. First, when covering the Chinese listed firms, do the Chinese intermediaries (financial analysts or journalists) have local information advantage over their US counterparts as measured by forecast quality for the analysts and the level of bias and market response to the information generated by the analysts or journalists? Second, do the Chinese intermediaries focus more on sociopolitical rather than market and economic factors in the reports, and is this focus a key contributing factor to their local information advantage? Third, can the institutional knowledge about the sociopolitical factors be acquired through education or work experience? These projects would shed light on the information advantage of local information intermediaries in the literature. They would potentially impact practice as US intermediaries and investors are increasing their investment in emerging markets such as China but experiencing severe information asymmetry.



Digital Entrepreneurship and Innovation: Outlier Behavior in the Mobile App Ecosystem

Pai-Ling Yin, Lloyd Greif Center for Entrepreneurship

Milan Miric, Department of Data Sciences and Operations

This project aims to study:

1) how mobile app entrepreneurs finance their launch in ways that differ from traditional tech entrepreneurs, and

2) how mobile app developers employ novel business models.

The mobile app context presents a unique opportunity to identify many failed developers, allowing us to correctly measure the prevalence of different strategies and correctly infer the success of these strategies. Using a mix of large-scale empirical analysis and qualitative case studies, we examine the drivers of bootstrapping among entrepreneurs and the drivers of non-monetary and freemium business models in mobile apps. We accomplish this by merging two unique datasets to explore these questions. The resulting findings will more generally speak to digital and platform environments, where low-cost entry, low-cost production, and the presence of network effects lead to intense competition. We hope to help industry participants and investors in the mobile app industry as well as similar digital and platform environments better understand successful resource acquisition and business model options.