Following the Emotional Well-being Data Trail: Part 1
Abstract
This first article in a two-article series introduces the Emotional State Indicator®, a vital tool in assessing emotional well-being, and summarizes the first three research studies conducted on it. This instrument measures the emotional operating system's balance, the primary benchmark of emotional well-being.
Keywords
Emotional well-being, Emotional balance, Measurement
Updated September 29, 2024
Introduction
My journey with assessments is deeply rooted in my family. As the daughter of a psychologist and a sister to two, I have always understood the significance of test and measurement. My father would psychologically evaluate my three siblings and me each school year to ensure we were on track developmentally or to understand where challenges might occur. This testing was essential for me, as I was born with a medical condition that resulted in brain damage, making potential trouble spots a genuine concern.
I joke that my first words weren’t dada or mama; they were Stanford-Binet and Wechsler.
This early exposure to psychological assessments sparked my interest and set the foundation for my future work in the emerging discipline of emotional well-being.
As a leadership consultant and executive coach, I became certified in the Leadership Circle Profile® and EQ-i 2.0® emotional intelligence assessments. These instruments allowed me to offer my clients a practical baseline to understand where they were on their development path, which informed a realistic vision of their future. They also reinforced my appreciation for assessments as the starting point for a development journey.
Because of my assessment experiences, when I created what I thought was a new approach to emotional intelligence in 2020, I knew that an assessment had to be an essential component. I created it because I wanted it to be simple for people to identify where they are in their emotional maturity journey (current state) so they could decide where they want to go next (future state). It would make developing emotional maturity as simple as locate yourself, then grow.
While the assessment has had many names and iterations since that fateful day in 2020, it is and will forever be (at least in my heart) the Emotional State Indicator (ESI). It is the crown jewel of Emotional Intelligence 3.0® (EI3.0®) and the White-Bryan Gestalt of Human Wholeness (GHW).
As an aside, the name EI3.0 was born out of a misunderstanding. I initially set out to create a more efficient and practical approach to emotional intelligence. However, I crafted a body of work that includes three systems: a system of individual emotional well-being (EWB) called the System of Emotional Well-being (SEW), a system of cultural well-being for organizations called the System of High Performance, and a Core Well-Being Model (CWM) that complements SEW. Core well-being is where wholeness is achieved. Together, SEW and CWM form GHW.
There are derivatives of the ESI for organizational use, including the Cultural Dynamics Inventory, the Power Style Inventory, the Executive Presence Inventory, the Leadership Fit Inventory, and the Culture Fit Inventory. These variations are a testament to the versatility and adaptability of the information collected by the ESI. However, the research summarized in this article focuses solely on the ESI.
The ESI research studies summarized in this two-part article series were instrumental in my evolution as a practitioner and system designer. They clarified the concept of emotional balance, the primary benchmark of EWB, and the indicators with the most influence on it. In tandem with this research, to further test and develop the underlying theory of EI3.0, I applied what was emerging from the data with my coaching and consulting clients to see if it would help them develop emotional health more efficiently and effectively. It’s been my privilege to watch many of them have quick and meaningful breakthroughs using the insights collected on emotional balance and emotional well-being from this research.
As will become evident, I didn’t understand what I needed to do to conceptualize EWB, so I simply started with traditional research methods. I ultimately grew frustrated with them; they weren’t the best approach for theory building, and it took me a few rounds to recognize that something was amiss. With the benefit of hindsight, I can see that what I was doing, without knowing it, was implementing a mixed-methods approach that confusingly blended a grounded theory research approach with an exploratory data analysis approach.
The realization that I was not using the most suitable research approach emerged as I tried to reconcile the results my research consultants delivered with the model in my head. The beautiful thing is that even the wrong research approach supplied meaningful data.
As the EI3.0 system matured and evolved, so too did the ESI. Likewise, as the ESI advanced, it propelled the conceptualization of my theory. These items were forged together in a refining fire, each dependent on the other for their existence.
I highlight the first three ESI studies in the rest of this article. As these studies are summarized, elements of GHW are revealed and touched upon. If they don’t make sense, don’t worry. Later Journal of Emotional Well-being articles will explain GHW and its structural elements.
The Emotional State Indicator
The first version of the ESI was premised on my belief that emotional intelligence is impacted by an individual’s emotional imprints of love and power and the comfort zone they create. Nine indicators shaped these three constructs. The three indicators for the love imprint are love, anger, and release. The three indicators for the power imprint are power, resistance, and individuality. Finally, the three indicators for the comfort zone are energy, awareness, and language. These nine indicators are explained in Emotional Intelligence 3.0: How to Stop Playing Small in a Really Big Universe.
The research approach used with this first version of the assessment consisted of reliability and validity analyses.
Round 1
After a year of determining the factors to be included in the ESI and crafting questions that might measure those factors, the first study on the ESI (Version 1, formulated in 2020) occurred in the Spring and Summer of 2021 (Round 1). The purpose of this study was to consolidate a robust set of questions for the ESI and to validate its structure and reliability. Specifically, the goal was to find an approximately 25-item set of questions that measured compassion, control, grace, optimism, purpose, and cooperation as they relate to individual emotional balance (at least, that is what I thought was being measured). The research consultants who assisted with the data collection and statistical analysis reorganized the questions and loaded them onto a set of well-known factors that had already proven reliable and valid.
Before data collection, a content item review was conducted to ensure that the questions were simply worded, free of jargon, not leading or biased, and not double-barreled.
The Round 1 dataset of 652 respondents was randomly split into two even groups: an exploratory factor analysis (EFA) sample of 326 participants and a confirmatory factor analysis (CFA) sample of 326 participants. The findings suggested that the ESI comprised five factors and the 25 items remaining after low-loading items were eliminated. The results also indicated that the ESI had structural validity and consistent reliability across both samples: the factor structure determined in the EFA was confirmed by the CFA.
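The even EFA/CFA split described above can be sketched in a few lines. This is an illustrative sketch only, not the consultants’ actual procedure; the respondent IDs and random seed are hypothetical.

```python
import random


def split_sample(respondent_ids, seed=42):
    """Randomly split respondents into two even halves:
    one for exploratory factor analysis (EFA) and one for
    confirmatory factor analysis (CFA)."""
    ids = list(respondent_ids)
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:half * 2]


# 652 hypothetical respondent IDs, as in Round 1
efa_ids, cfa_ids = split_sample(range(652))
print(len(efa_ids), len(cfa_ids))  # 326 326
```

Splitting before any analysis matters here: fitting the EFA and CFA on the same respondents would let the confirmatory step simply re-discover the exploratory sample’s quirks.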
Part of the challenge of whittling the question list down to 25 was omitting questions that provided me, as an executive coach, with useful insights. Omitting them did not feel like the best approach from a coaching perspective.
I also didn’t like that many of the eliminated questions related to the power construct, something I felt committed to as part of the model I was trying to build. Moreover, compassion, control, grace, optimism, purpose, and cooperation were not the factors I felt shaped emotional balance. I was uncomfortable with what happened to my proposed model during the Round 1 research process; it felt like we were trying to fit the proposed model into existing parameters. I was creating something new, not trying to fit in with current ideas of emotional intelligence.
My work became standing in the divide between creating a psychometrically sound instrument and being unwilling to eliminate questions that performed poorly statistically yet offered meaningful insights into a client’s emotional health, all while maintaining (from my perspective) the model’s integrity. As a result, I found myself politely at odds with the research consultants. I feel confident that I am not the first researcher to find themselves defending their model against statistical methods, especially after unknowingly selecting an inappropriate research approach.
Because of this tension, I kept tinkering with the questions and the constructs.
Round 2
A second study was conducted in 2022 (Round 2). As in Round 1, the Round 2 dataset of 408 respondents was randomly split into two even samples: an EFA sample of 204 participants and a CFA sample of 204 participants. Eighteen rounds of item reduction produced a final set of 27 items that comprised six factors and explained 59.7% of the overall variance. The factors were the same as in the 2021 study: compassion, control, grace, optimism, purpose, and cooperation.
Factor loadings were all strong, ranging from .480 to .855. Reliability was also strong, with Cronbach’s alphas for each factor ranging from .696 to .817. These findings suggested that the ESI comprised the six factors and 27 items, and that it had structural validity and consistent reliability across both samples.
Once again, constructs were reconfigured, and questions were eliminated from the model. Since I was trying to formulate a new model, I felt dissatisfied with how the research went.
Resetting
In these first two rounds of research, the consultants eliminated low-performing questions to improve fit. It took both rounds for me to understand this process of discovering the best questions for increased model fit. Fortunately, my brother, Dr. Jerry White, and my sister, Dr. Janie Aristizabal, were sounding boards and guides. Based on their input, I adopted a two-tier model containing both psychometrically sound questions and meaningful coaching questions. That doesn’t bode well for model fit, but it does make me a more effective well-being practitioner.
As for the questions slated for elimination in the name of model fit, I let my instincts guide me. I had meticulously researched the background of the constructs included in EI3.0, and I knew deep down that my questions were most likely the more compelling ones for measuring the construct of emotional balance.
I didn’t want to be closed-minded, so I reviewed the data and talked to my peers. Ultimately, some questions were deleted and reworded, and some remained unchanged.
Additionally, purely on intuition, I felt the factors used in the first two rounds were not true to the constructs I believed were the critical drivers of emotional balance, the key determinant of emotional well-being. I spent much time thinking about that and its implications for my theory. Ultimately, I decided to find a researcher who would use my constructs. When creating something new, defiance and stubbornness can be essential traits.
Round 3
With revised constructs and questions, another study was conducted in the Winter of 2023 (Round 3) with a different research consultant. I proposed two factor structures: one comprising the two emotional imprints (love and power) and the comfort zone they create, and one comprising nine indicators (Anger, Awareness, Energy, Individuality, Language, Love, Power, Release, and Resistance), each related to one of the imprints or the comfort zone. The first approach was labeled the Imprint model and the second the Indicator model. The study data consisted of the responses of 506 participants.
The Imprint model showed a mixed fit to the data, with poor fit on some metrics (CFI = .72, TLI = .72) and moderate-to-strong fit on others (RMSEA = .049, SRMR = .08). The Indicator model likewise showed a mixed fit, with poor fit on some metrics (CFI = .81, TLI = .79) and moderate-to-strong fit on others (RMSEA = .047, SRMR = .08).
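To make the mixed-fit reading above concrete, the four indices can be labeled against conventional rule-of-thumb cutoffs (roughly following the commonly cited Hu and Bentler guidelines; the exact thresholds below are my assumption, not from the study).

```python
def rate_fit(cfi, tli, rmsea, srmr):
    """Label each model-fit index against common rule-of-thumb cutoffs.
    CFI/TLI: higher is better; RMSEA/SRMR: lower is better."""
    return {
        "CFI":   "good" if cfi >= .95 else "acceptable" if cfi >= .90 else "poor",
        "TLI":   "good" if tli >= .95 else "acceptable" if tli >= .90 else "poor",
        "RMSEA": "good" if rmsea <= .06 else "acceptable" if rmsea <= .08 else "poor",
        "SRMR":  "good" if srmr <= .08 else "poor",
    }


# The Imprint model's reported values
print(rate_fit(cfi=.72, tli=.72, rmsea=.049, srmr=.08))
```

Run on the Imprint model’s numbers, this yields poor CFI and TLI but good RMSEA and SRMR, which is exactly the split described in the text.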
A Horn parallel analysis was run to find an optimal model. The analysis started with 74 question items organized by Imprint, Indicator, and a new category called Role. In each round of the analysis, a parallel analysis was run to determine the best number of factors, an exploratory factor analysis was fit using that number of factors, and item loadings were inspected for strength of relation to each factor. The two lowest-loading items were dropped, and the process was repeated until every remaining item loaded on a single factor, with no cross-loading. That left the 25 questions with the strongest loading values on the three factors of note, which were determined to be anger, universal embodiment, and personal growth. The model’s fit was adequate, with a Tucker-Lewis Index (TLI) of 0.913 and an RMSEA of 0.04.
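The drop-the-weakest-items loop described above can be sketched as follows. This sketch holds a toy loadings table fixed, whereas the actual study refit an EFA (with a fresh parallel analysis) after each drop; the item names, loadings, and cross-loading threshold are all hypothetical.

```python
def reduce_items(loadings, cross_thresh=0.30):
    """Iterative item reduction: drop the two weakest items each round
    until every remaining item loads cleanly on a single factor
    (no secondary loading above cross_thresh).
    loadings: dict item -> list of absolute loadings, one per factor."""
    items = dict(loadings)

    def cross_loaded(lds):
        top, second = sorted(lds, reverse=True)[:2]
        return second > cross_thresh  # item relates to more than one factor

    while any(cross_loaded(lds) for lds in items.values()) and len(items) > 2:
        # drop the two items with the weakest primary (strongest) loadings
        for item in sorted(items, key=lambda i: max(items[i]))[:2]:
            del items[item]
    return items


toy = {"q1": [.80, .10], "q2": [.75, .05], "q3": [.45, .40],
       "q4": [.50, .35], "q5": [.70, .10]}
clean = reduce_items(toy)
```

On this toy table, the two cross-loading items (q3 and q4) also have the weakest primary loadings, so one pass removes them and the loop stops with q1, q2, and q5.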
Once the reliability analysis was complete, multiple CFAs were conducted to further validate the ESI, using the PANAS, the Big Five survey, and the SD3 Dark Triad survey as external criteria or benchmarks. The idea was to test the extent to which the ESI scale relates to theoretically related external constructs (convergent validity) and is uncorrelated with distinct constructs (discriminant validity). Each CFA model assessed the ESI’s convergent and discriminant validity.
The Round 3 results indicated that the ESI showed moderate to strong construct validity, with moderate to strong correlations among the items within each latent variable. At the same time, correlations with the comparison surveys were not high enough to consider the ESI redundant, suggesting that the scale measures unique constructs not captured by the external benchmarks. Overall, the study supported using the ESI as a valid measure of emotional balance.
Because this research revealed that the ESI measured unique constructs, I point to this moment as the one when the ESI became one of the first scientifically validated measures of emotional balance, the primary benchmark of emotional well-being.
Once again, however, I refused to omit the questions with lower loading values from the ESI. I continued to believe (and still do) that some of those questions remain important for “reading the tea leaves” as an executive coach and emotional well-being practitioner.
After this Round 3 study, I focused on emotional balance as the essential benchmark of emotional well-being, updating some of the questions to reflect this focus.
Now that I knew that emotional balance was a real construct, I wanted to understand its relationship to emotional intelligence. More studies would follow to continue clarifying the model.
Conclusion
This first article in a two-part series summarizes the first three rounds of research conducted on the ESI. These studies revealed the ESI to be a reliable and valid measure of emotional balance, the primary benchmark of EWB.
The next article summarizes three more rounds of research pivotal to the development of the current version of the ESI and GHW.
Miscellaneous Notes
This section shares information on participant recruitment, data screening, and funding.
Participant Recruitment
A research consulting firm conducted Rounds 1 and 2. The participant recruitment pool was a set of panel respondents maintained by the consulting firm. At the end of the survey, participants were automatically redirected to a separate survey that collected names, emails, and phone numbers for incentives. The incentive disclosure informed respondents that completing the survey gave them up to six chances to win a $25 gift card.
In Round 3, participants were recruited via Amazon Mechanical Turk (MTurk), an online platform that allows a variety of participants to respond to the item pool. The participant pool was global, with some smaller countries excluded and 50% of the pool being from the United States. Participants were required to be over 18 and received $1 to complete the survey.
Data Screening
Each dataset was screened before analysis in each round to ensure the inclusion/exclusion criteria were met. In Rounds 1 and 2, participants were removed if they 1) did not consent, 2) were younger than 18, or 3) dropped out at the beginning of the survey. Additionally, in all three rounds, participants were removed if 1) they completed the survey too quickly, 2) they took too long to complete the survey, 3) there was no variance in their responses, 4) they dropped out before completing the survey, or 5) they missed the answer to a question included to ensure a human was responding to the survey questions (MTurk participants only).
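The screening criteria above amount to a simple filter over raw responses. This is an illustrative sketch for one round; the time thresholds and response format are assumptions, since the article does not publish the actual cutoffs.

```python
def screen_responses(responses, min_secs=60, max_secs=3600):
    """Keep only responses that pass basic quality screens.
    Each response is a dict with:
      'seconds'      - completion time
      'answers'      - list of item scores (None for a skipped item)
      'attention_ok' - True if the human-check question was answered
    The thresholds are hypothetical, for illustration only."""
    kept = []
    for r in responses:
        answers = r["answers"]
        complete = all(a is not None for a in answers)   # no dropout
        too_fast = r["seconds"] < min_secs               # speeding
        too_slow = r["seconds"] > max_secs               # took too long
        # "no variance" = straight-lining: same answer to every item
        flat = complete and len(set(answers)) == 1
        if complete and not too_fast and not too_slow and not flat and r["attention_ok"]:
            kept.append(r)
    return kept
```

A dropout, a straight-liner, a speeder, and a failed attention check would each be removed by a different branch of the filter, mirroring criteria 1 through 5 above.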
Funding
All of the studies described in this article were funded by my husband and biggest fan, James W. Bryan. I am grateful for his belief in and support of my work. Thank you, dear one.
Downloads
Download the PDF version of this article.
Download terms used in EI3.0 (Revised March 30, 2025).
References
Chun Tie, Ylona, Melanie Birks, and Karen Francis. 2019. "Grounded Theory Research: A Design Framework for Novice Researchers." SAGE Open Medicine 7: 1–8. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6318722/.