Multimodal Microanalysis: Scattergories! Video Clip

Georgetown University, 2020. Linguistics: Multimodal Interaction Analysis. Professor: Frederick Erickson.

It was a great privilege to work in a small group setting with the seminal scholar Frederick Erickson! Our course centered on learning to transcribe and microanalyze video – a complicated task, due to the immense amount of visual and auditory information captured through this medium. If you’ve ever wondered how researchers might represent data from video, keep reading!

Here, I’m presenting my final project, which not only shows you my final product, but also walks through my process for developing this multimodal transcription. The video is from a game of “Scattergories,” in which my mom, fiancé, two cats, and I participate. If you need a laugh and would like to watch our full gameplay, click here!

I’ll begin by sharing the clip I microanalyzed and transcribed (in Scroll format), as well as a Narrative Description of it.

Next, you’ll find my Rough Transcript of the microanalyzed clip’s audio.

Then, you’ll find my Synoptic Chart, which is a higher-level analysis and transcription of the interaction before and after the clip (20 minutes). I’ve also linked this portion of the video recording.

Following that is my Scroll Transcription of the clip.

Finally comes my Analysis and Process Description Paper.

It’s difficult to get a sense of the scroll from the way it’s formatted here; please contact me if you’d like to see an easier-to-view file of my scroll or another portion of my project!

Microanalyzed clip:

Narrative description of microanalyzed clip

The clip for my microanalysis is extracted from a round of the game “Scattergories,” played by my mother (identified in the transcript as “Mom”), my fiancé (Chris), and myself (Ashley). We are seated side-by-side on our sofa. Also present is one of our two cats, Sascha, who sits on the arm of the sofa next to Mom. Nearly centered in front of Ashley is a small table holding some game components.

“Scattergories” is a game we have all previously played together; it involves rolling a letter, then individually writing down one item per topic specified by a game card list, within approximately 3 minutes. Answers are then publicly reviewed, which, for our way of playing, typically involves a lot of time discussing and joking about answers, as seen here. This section occurs late in our answer discussion of the letter “G” round and follows a brief interruption.

Shortly before, one of Mom’s responses included a GTO (automobile), prompting Ashley to sing part of the 1964 song “G.T.O.”, performed by Ronny & the Daytonas. Our other cat, Humphrey, who had been sitting on Ashley’s lap, jumped off and exited, which Mom took as an opportunity to leave momentarily to get a tissue. As soon as Mom gets up, Ashley recommences singing, this time to Chris in a silly manner and with exaggerated gestures, before hugging and rocking sideways with him. She resumes her previous position as Mom returns, but continues singing through the end of the verse.

This clip begins when Mom has finished sitting down. She and Chris begin the discussion of the next topic, “things people gossip about.” Ashley continues to sing the chorus, which consists primarily of vocables rather than words.

Discussion of answers, singing, and laughter continue; singing ends when Ashley presents her answer (gum), which transitions the group to constructed dialogue between Chris and Mom about hypothetical ‘gum gossip,’ followed by personal stories that could ‘count’ as real-life instances of ‘people gossiping about gum.’

Late in the clip, Mom is ready to move to the next topic/answer. While Chris and Ashley continue to discuss, Mom moves her focus back to her game board; she then makes a very large physical ‘lean’ toward Chris, crossing in front of part of Ashley’s body. After a moment of eye contact with him, Mom leans back to her previous position, at which point she says “Ok,” followed by reading the next topic. Chris finishes his speech turn and Mom provides commentary as a way of concluding the discussion of the “gossip” topic.

Rough Transcript

Synoptic Chart

Synoptic Chart clip:


Scroll Transcription

Analysis and Process Description Paper


            For my microanalysis, I chose a 72-second clip from a 1-hour, 3-minute, 50-second recording of a family setting up, playing, and putting away the game “Scattergories.” The clip begins at 38 minutes, 52 seconds into the recording, and for my synoptic chart, I mapped two complete rounds of gameplay: the one in which this clip appears and the one prior to it.

           Playing Scattergories involves rolling a letter, then individually writing one item beginning with that letter for each category topic specified by the list on the game card, within approximately 3 minutes. Each list includes 12 topics; the list and letter change each round. At the end of the writing time, players review their answers with each other. Points are earned for each unique answer provided; for duplicate responses, neither player receives a point.
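The unique-answer scoring rule described above is simple enough to sketch in code. The snippet below is purely illustrative (the function, player names, and answers are my own invented examples, not part of the game materials or my project):

```python
from collections import Counter

def score_round(answers):
    """Score one Scattergories round under the unique-answer rule.

    `answers` maps each player's name to their list of responses for the
    round. An answer earns 1 point only if no other player wrote the same
    thing; duplicated answers earn nothing for anyone.
    """
    # Tally how many times each (case-normalized) answer was given overall
    counts = Counter(a.lower() for responses in answers.values() for a in responses)
    return {
        player: sum(1 for a in responses if counts[a.lower()] == 1)
        for player, responses in answers.items()
    }
```

For example, if two players both wrote “gum,” neither would score for it, while a lone “GTO” would earn its writer a point.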

Background and Interaction-influencing Factors

           The recording participants include three game players – my mother (identified as “Mom”), my fiancé (Chris), and myself (Ashley) – and two cats (Sascha and Humphrey). Scattergories is a game we have previously played together, and this interaction exhibits our typical attitude toward gameplay: lighthearted, joking, and frequently featuring entertaining tangential talk prompted by a response. I, in particular, write as many answers for a topic as I can think of, and we often brainstorm further ideas while reviewing those recorded during the timed period. As such, our answer discussion time and the answers we permit are likely more indulgent than may be typical for other players. During this interaction, we played 6 total rounds (using the letters B, O, R, G, S, and A; the rounds mapped are R and G).

           All participants except Mom live together in the home where this interaction occurred; large-scale renovations (such as replacing the heating system and adding a bathroom) had been in progress for the nearly two months the family had been living there. The home was not fully unpacked or furnished, which contributed to the spatial arrangement established for playing: the three players seated side-by-side on a sofa, facing the video camera, with a small table roughly centered in front of them. The table was primarily accessed by only two players (Ashley and Chris) during the writing portions and for resting game elements (e.g., individual game boards, pens). Also on the table was the game box lid, in which the letter die and hourglass were used. Mom held her game elements in her hands or rested them on her lap, both during the writing and answer discussion periods. She also managed the game components stored in the bottom of the box (placed on the floor), including changing out and putting away the category lists used. Ordinarily, we have played at a table and chairs, allowing for a more circular F-formation that permits easier eye contact with every player and equal access to game elements in a more concentrated o-space (Kendon 1990).

           This recording was made in mid-February while Mom was visiting for three nights – during which time, each participant worked on large home repair projects (e.g., moving and installing drywall), Mom and Ashley shopped for and purchased a wedding dress, and Ashley worked on assignments for graduate school. The game was recorded on the last day of Mom’s visit, shortly before she left for the airport. All of these factors influenced the overall tone of the interaction; though I cannot speak for the other participants, for myself, exhaustion, sleeplessness, and some sadness related to Mom’s leaving contributed to my behavior.

Transcription: Equipment

            The equipment used for my analysis included my computer, an iPad Pro, and an Apple Pencil. The most helpful computer software for reviewing video footage was ELAN (other programs used are mentioned later). In the future, I would likely work exclusively with ELAN, as it provides the most extensive viewing options – particularly valuable for precisely capturing the timing of visual details. For creating my scroll and synoptic chart, I used the Notability application on my iPad with the Pencil exclusively; for my preferences, I cannot imagine more useful tools. I had nearly all of the flexibility and creativity that working by hand affords (some users may not like writing and drawing on a screen as much as on paper) – and my skills were much enhanced by the program’s ability to draw shapes I would otherwise have struggled to make precise (e.g., straight lines). I was able to easily trace screenshots of particular moments by importing the photo file to Notability and drawing outlines on top of it. The color and texture options for pens and highlighters are extensive, as are the ‘paper’ backgrounds (e.g., various sizes of graph paper, which can be added, changed, or removed at any stage); mistakes or sloppy penmanship are easily erased without leaving marks; drawings can be copied, pasted, and resized; and areas may be selected and moved, permitting layout changes without needing to start over. This latter step is somewhat time-consuming, but it allowed me to preserve the hours of work I had already recorded on the full length of my scroll when I decided to resize my participant sections in favor of allowing more room for drawings and other notes at the bottom. The drawings better serve my analysis than the small, bird’s-eye F-formation markings I had originally planned, but I do not know whether I would have been willing to discard all of my work and begin again, as would have been necessary had I been working on paper.
Notability files can be exported as PDFs (among other choices), which allows for easy sharing and printing, and users can opt to store backups of their files on several free online cloud services; the files can be accessed and modified on other devices on which the user has installed and logged into Notability.

Transcription: Process and Observations

           In beginning this project, I first compared a number of segments that lasted between 60 and 90 seconds. I selected my region because it was one of the most active and varied in terms of the types of participation and movements occurring – singing during conversation, rearranging bodies, overlapping speech – while still involving some elements of gameplay. Further, I chose it because it is a section that makes me laugh – a useful attribute for something I planned to spend significant time working with. I reviewed some of the surrounding areas as candidates for the synoptic chart portion, but had trouble deciding, so I returned to this step later.

            I next made my rough transcript, as I was unsure how otherwise to move forward. I first worked on simply typing the words and aligning concurrent utterances, then addressed matters of formatting. Because significant blocks of overlapping speech, singing, and laughter occur in this clip, I decided to use a quasi-musical score format, which both highlights and accommodates legible reading of overlapping elements. I modified this transcription style from one taught to me by Cynthia Gordon to include more detail – particularly to show greater precision in where overlap occurs and to incorporate a sense of the rhythmic pulse of the interaction (a topic that has interested me greatly since I became aware of it through our class discussion and several course readings). As I formatted my table, I attempted to break utterances into beats and measures – following, though greatly simplifying, examples in Erickson (2004). Initially, I believed my ‘measures’ included approximately 8 equal beats, though over the numerous times I have since listened, I have felt less certain of this. I have revised them slightly to at least form roughly equal measures; in the future, I would like to work on this further, as I am very intrigued by the interactional significance of rhythm and timing (for instance, the timing expected for providing an answer or holding the floor during school activities, as seen in the clip analyzed for the Comparative Noticing Assignment and in Chapter 3: “I Can Make a ‘P’” of Erickson 2004).

            I then returned to selecting the segment to use for my synoptic chart and found that two rounds of game play timed almost exactly to the 20 minutes recommended for that task (I define the boundaries of rounds as activities that relate specifically to a certain letter or list, including set up and transitional time between rounds). The portion I have charted runs from 22:06-42:10, with my microanalyzed clip occurring near the end of that time.
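The segment bookkeeping above is simple clock arithmetic; as an illustration (a helper written for this write-up, not a tool from my actual workflow), converting timestamps to seconds confirms that 22:06–42:10 spans 20 minutes, 4 seconds:

```python
def to_seconds(ts):
    """Convert a 'MM:SS' or 'HH:MM:SS' timestamp to total seconds."""
    total = 0
    for part in ts.split(":"):
        total = total * 60 + int(part)
    return total

# The charted segment: 42:10 minus 22:06 is 1204 seconds (20 min, 4 s)
chart_length = to_seconds("42:10") - to_seconds("22:06")

# The 72-second microanalyzed clip starts at 38:52, near the chart's end
clip_start = to_seconds("38:52")
```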

            I next worked on determining what to track and attempted several layout ideas for my scroll; those efforts were not promising, so I switched to taking written notes on the short video clip. When I began working, I alternated between the QuickTime Player and VLC applications (I later used others before transitioning to ELAN, as elaborated below). I tried to apply techniques gleaned from our recent Comparative Noticing Assignment. I watched my video many times by scrolling, sometimes using the arrow keys to move minutely through frames (with no audio at this stage). I decided to sketch how participants are sitting when the clip begins; I did this freehand on my iPad (our class discussion about tracing screenshots was helpful; that method proved to be much more accurate and efficient). Though it took me about an hour to do something I ultimately re-did (by tracing), it was actually very useful because it helped me notice details I had previously ignored – for example, the way Mom interacts with her tissue. The presence of the tissue not only impacts her behavior when she is wiping her nose – it also affects her other gestures as she speaks and prompts smaller physical shifts, then a full-body rearrangement when she puts it in her pocket. Its absence, once in her pocket, allowed for greater access to her game board, which may have influenced the pacing of the subsequent talk; Mom looks back at her board immediately after settling into her new posture. She engages briefly in further talk before returning her attention to her board, followed by physically, then verbally, cueing us to move to the next topic (toward the end of the clip). Though ‘tissues’ are mentioned overtly in other parts of the recording, there is no audible element that could indicate a tissue’s presence – and potential impacts – in the segment analyzed here.
Without considering visual information, this factor would almost certainly be left out of any transcription made, losing key information about pacing and foci of attention. Thus, sketching an initial picture proved to be one of the most helpful early steps I took in preparing my scroll – not only because it provided a useful means of representing actions, but because it encouraged me to notice aspects I had overlooked.

            My next step was making notes related to action; I continued to scroll to get an overview of, and a means for describing, the larger movements. I then focused on individuals, beginning with Chris because he moved the least of the three of us. I watched slowly, primarily scrolling, beginning with his feet and legs, though also jotting down a few movements in other parts of his body. I planned to continue watching feet and legs because this seemed a manageable way to approach such a complex task – so I moved on to assess myself. Because I was sitting cross-legged the entire time, I found little data to record. It was only through this process that I began realizing that I would need to track different elements than were helpful for some other video clips discussed in class, as my interaction did not include as much movement as I had previously imagined. I moved on to Mom, beginning with her feet and legs, then making detailed notes of what was occurring in other, more active regions. I returned to recording general notes about myself – larger movements and gestures, plus places I wanted to look at in greater detail later.

            The above note-taking step proved somewhat inefficient; it would have been more effective to persist in creating a rough layout for my scroll and record my notes there. The valuable information I had tracked in my notes became difficult to navigate as I added to them; additionally, once I began viewing in ELAN, I was able to get much more precise timestamps – all of which resulted in redoing large portions of work. In the future, I would likely keep my initial step of making a rough transcript, as that element contains specific markers, like words, to which a timestamp can be attached – and the smaller size is easy to work with (mine is 3 pages; my scroll is 24). Next, I would move directly to a scroll formatted with a constant timeline and adequate space for taking notes, so I could better track exactly when things occur for each participant. In landscape orientation, I would likely allow 2 or 3 seconds per page, divided vertically, with a block for each participant dividing the page horizontally (unless the data, including the number of participants, differed significantly from that explored here).

           Because I was still unsure how to format my scroll, I next worked toward establishing a constant timeline by returning to my rough transcript and attempting to match it to the video. The quasi-musical score layout and the work I had previously done to ‘chunk’ beats made this step easier than I think it would have been with other transcription styles (or if I had previously ignored rhythmic aspects). I marked notes on my iPad, where I could work relatively quickly, and then made necessary adjustments to my Word document later. The main difficulty I had was finding (or using) software that allowed me both to slow the video speed (not possible in QuickTime) and to display a precise time counter (not possible in VLC). Final Cut Pro (video editing software) and Logic Pro X (music editing software) provided the most detailed time counters of these options, and Logic was useful for displaying sound waves, but I couldn’t easily adjust playback speed. Ultimately, I worked between various programs, choosing the best one for the task at hand. Though I later had to revise some details, marking a constant timeline on my rough transcript was a helpful step that transitioned relatively easily to my scroll.

           I moved next to a somewhat inefficient, though useful attempt to capture details of focus of attention/gaze and eye contact, which I recorded onto my timed rough transcript. I began with Mom, then myself, and ended with Chris. Throughout my analysis, I generally focused on one main individual and element at a time; this meant repeating steps, but felt like a manageable way for me to work. I was particularly interested in capturing eye contact; in most cases, my video is distinct enough to clearly see when this is achieved.

           I then switched to working on my synoptic chart. Though I initially found it difficult to grasp what to track (particularly since my segment features relatively few changes of posture or other large movements), I developed a sense for this as I began watching more closely. Combining narrative descriptions of sections with the activity mapped in a heart monitor-style graph was an effective way for me to understand this task and provides a very useful way to see what movements coincide with or prompt those of others – particularly between humans and animals. In graphing, I focused again on individuals and played through at 4 times faster than the usual speed with no sound. I tracked Humphrey this way, then at regular speed, and took notes on his behavior – primarily movements of his head to look at particular things, ear twitches, and times when walking around or jumping on and off of the couch; he also meows occasionally while off-screen. I then experimented with ways of representing this (resulting in the depiction submitted). I repeated this step with Sascha’s movements. I then moved to Mom, since she appears to have the most motion of the humans; I used a mixture of scrolling and regular playback to track these actions. Mom has a few larger body movements that result from laughter, as well as some bigger gestures. Since she manages some game items, she must lean down fully to pick up and return these to the floor and reach to distribute or collect items from Chris and me. She also leaves the room briefly to get a tissue, then blows her nose for some of the video. Once she has the tissue (late in the clip), I ranked her movements as higher because of the additional, unique artifact she interacted with (a ranking I repeated when others interacted briefly with various objects). Next, I moved to Chris, who rarely moved his full body. 
His primary actions included patting the ottoman for Humphrey to jump onto, using his phone to look up the validity of a response, blowing his nose, being moved by me when I hug and rock sideways with him while singing “G.T.O.”, putting his pen in my ear, and again being moved by me when I lean on him toward the end of the segment. Lastly, I tracked my behavior, the most interesting aspects of which included drinking from my water bottle, interacting with the cats (they walk behind and sit beside me; Humphrey sits on my lap), making certain large gestures, singing, and interacting with Chris, as previously described. I finished my synoptic chart by highlighting and naming boundaries of events (game rounds and set up/transitions) and creating a key with relevant information.

           With this completed, I returned to working on my scroll and developed a rough layout similar to the version submitted. Though moving between projects was useful for planning and discovering unexpected ways of depicting action, it was also somewhat disruptive, as I lost continuity. In the future, I would likely try to complete stages rather than move back and forth. In designing my scroll, I began by attempting to draw bird’s-eye view symbols for participants (similar to those in Erickson 1983) to show gaze, focus of attention, or F-formations. I had difficulty drawing these shapes, and additionally, realized that this was not the most useful way to depict my data. Gaze seemed the most useful element, as our heads, and to some extent, our torsos, follow the eyes. Eventually, I created a gaze line drawing using various colors to show attentional focus and abandoned the bird’s-eye representation in favor of using drawings to show certain gestures and changes in body position (combined with narrative description).

           Around this stage, I moved to working with ELAN, which took some adjustment. Though I found the size of the viewing screen and the need to alternate between program windows somewhat restrictive, the switch aided my progress significantly. Working in ¼-second increments was recommended for this analysis, but I was not able to watch that minutely in the programs I had been using. With the precision of ELAN, I began marking on my scroll. I started with audible utterances because those made it easier to track and plot other elements. Except when movements are very big or unusual, words are much easier to remember and use as markers; they convey more specific meanings, which can be understood regardless of gaze location, whereas many visual cues go unnoticed when gaze lies anywhere other than on the movement-maker. This – plus the established systems for transcribing speech – likely contributes to why language receives such focus relative to physical behavior. Working with the video at 80% speed, I began with Chris’ speech, though I also marked moments where his speech overlapped with others’. I transcribed Mom’s and my utterances together because of the numerous instances of latching speech and overlapping singing and laughter.

           I then tracked gaze more precisely: first Mom, then myself, then Chris. As mentioned before, I used different colors to indicate the object of attention. Next, I transcribed larger movements, beginning with Sascha – combining narrative descriptions with traced drawings. I re-drew our opening positions, then our closing postures; though we move slightly out of these positions at times, we begin and end in relatively similar locations. This prompted me to ask what I could include that was most interesting or significant, but would be impossible to know from only listening to an audio recording or reading a transcript made from such a recording (as is typical in linguistics). From this guidance, I selected movements to draw and/or describe in more detail for myself, Chris, then Mom. Each layer I added revealed new aspects of our interaction I had previously not been able to notice, which I have attempted to identify on the scroll. For instance, I found numerous instances of multiple contextualization cues (Gumperz 1977), particularly regarding jointly-produced physical and verbal cues – Mom and Chris both nod or shake their heads while verbally producing aligning utterances; words spoken with marked intonation and intensity are often accompanied by larger gestures and other body movements – the synchrony serving to amplify their impact. Mom and I repeat exaggerated, complementary movements, as in the two times we almost simultaneously lean forward while laughing. I also noticed some ‘rebounding’ of bodies following changes in proximity (similar to the description in Kendon 1990), as when Chris reaches forward to put his pen in my ear (his largest movement): I first lean away from him, then lean close to him (my hand briefly touching his arm), before returning to my previous, ‘neutral’ position.

           The final new element I added to my scroll was an attempt at musical notation of verbal elements only for a 7-second chunk – which was much more complicated than I’d imagined. It was tricky to hear all of the elements (and would be considerably more complex when adding nonverbal behavior). Our interaction seems to have approximately one beat per second, as previously established in our discussion of Erickson’s work. I had trouble determining a meter, in part because the singing that just precedes this segment (at 16) conflicts slightly with our speaking rhythm. I noticed this mismatch when making my rough transcription and was surprised that my song did not establish the pulse of the interaction (or vice versa). I decided to work with quarter notes for ease and eventually settled on 2/4 meter. I attempted to draw notes and rests in the existing second-divisions on my scroll timeline so that the notes would appear roughly where they occur with the other elements, but this made it difficult to write (and read). I also do not believe 2/4 is the most useful meter choice, and I felt that I was fudging some of the timing in my attempt to create a rhythmic notation relatively quickly. I would like to work more on musical notation of interactions in the future (perhaps using an excerpt with less overlap) and also consult scores that incorporate elements of rap for visual guidance (for instance, Lin-Manuel Miranda’s Hamilton). I am not sufficiently familiar with rap to know whether natural speech is modified slightly to accommodate a steady rhythmic pulse (as it felt I was attempting to do) or if timing is accomplished in some other way.

           Another element I would like to pursue further is gaze. Most of our readings and course discussions have addressed the impact of the behavior of all participants on an interaction; many have looked at ways that gaze and apparent focus of attention impact fluency and a sense of being respected, understood, or appreciated (with practices varying across cultures). One of the earliest papers we read that specifically highlights ways the gaze behavior of ‘listeners’ impacts the speaking behavior of ‘speakers’ was C. Goodwin’s (1980) investigation. This study, along with Kendon’s (1990) description of F-formations and Hall’s (1966) discussion of proxemics and sensory input, has led me to consider the impacts of my interaction’s close, side-by-side seating arrangement – particularly its effects on eye contact.

           For instance, I was surprised to notice how infrequently I made eye contact or gazed directly at Mom or Chris. When I did so, my location in the middle meant at least partially excluding the other person from my F-formation. I wondered whether my initial singing may have been prolonged by my location, which made it easier to gaze straight out – perhaps fueling my ‘entitlement’ to create and remain in a ‘world’ of my own, where the song was the most important area of focus.

           I also noticed that Chris was the primary human recipient of gaze, particularly for Mom. For much of the time when Chris speaks, Mom looks directly toward him. Though she responds to my contributions verbally and with laughter, we make only occasional eye contact. One element of this could be that Mom does not know Chris as well as she knows me; he also does not talk as much as I do – thus, she may have wanted to pay greater attention to his contributions. It seemed that she also directed her own answers more toward Chris; this may have been in response to my early, less interactionally-focused behavior (i.e., singing to myself), though spatial factors seem likely to have contributed. Mom’s making eye contact with me would require a less comfortable head position, and by looking toward Chris, she automatically includes me in her F-formation. Mom also directs an embellished retelling of a story about gum that happened the previous day to Chris, who was not present. This reminded me of the “Father Knows Best” dynamic described by Ochs and Taylor (1995), where mothers reported stories and directed information to fathers, making me wonder whether a gendered component could have been a factor. The dynamic shifts shortly after, as Mom seems to shepherd Chris and me in a motherly or teacherly manner (my childlike behavior likely also contributed to this).

           Toward the end of the clip, I orient slightly away from Mom and look more toward Chris (as well as at my game board) while Chris and I share an extended period of conversational focus, somewhat to the exclusion of Mom (though she may have wanted to move to the next topic independent of this, as she chooses to move her gaze back to her game board). Her desire to resume the game is clearly evidenced by her full-body lean and subsequent speech. Around 60 seconds into the clip, she looks toward Chris, then begins an exaggerated lean in his direction, crossing in front of me and entering the bubble of my “intimate distance – far phase” (Hall 1966, p. 117). Once eye contact with Chris (the current speaker) has been achieved, she leans back to neutral, says “Ok” (a cue she has previously used in a similar manner during the larger interaction), and reads the next category topic. Chris finishes his speech turn and Mom comments on his final remarks to close our discussion of the topic before moving on. This familiar-to-me behavior seems to index her positions of “mother” and “teacher” (her profession). I likely would have responded more immediately to Mom’s cues than Chris did, but since he continued his turn, I kept my attention with him rather than returning my focus to the game, as directed by Mom (though perhaps if I had looked away, he would not have continued his turn).

           From my work transcribing, I found areas of particular interest (such as Mom’s lean), topics I hope to further explore (like gaze and rhythmic elements of interactions), and I learned a great deal about methodology that I will be able to hone and apply to future projects. I enjoyed working with data that involved loved ones relaxing and being silly, and I benefited from exposure to the processes my classmates took with their data, as well as the guidance and expertise of an eminent scholar in this field.


ELAN (Version 5.9) [Computer software]. (2020). Nijmegen: Max Planck Institute for Psycholinguistics.

Erickson, F. (1983). Money tree, lasagna bush, salt and pepper: social construction of topical cohesion in a conversation among Italian-Americans. In D. Tannen and J. Alatis (eds.), Georgetown University Roundtable 1981: Analyzing discourse: text and talk. (pp. 43-70). Washington, DC: Georgetown University Press.

Erickson, F. (2004). Talk and social theory: ecologies of speaking and listening in everyday life. Cambridge UK: Polity.

Goodwin, C. (1980). Restarts, pauses, and the achievement of a state of mutual gaze at turn-beginning. Sociological Inquiry 50(3-4), 272-302.

Gumperz, J. (1977). Sociocultural knowledge in conversational inference. In M. Saville-Troike (ed.), 28th Annual Round Table Monograph Series on Languages and Linguistics. Washington, DC: Georgetown University Press.

Hall, E. T. (1966). The hidden dimension. New York: Doubleday.

Kendon, A. (1990). Spatial organization in social encounters: the F-formation system. In A. Kendon (ed.), Conducting interaction: patterns of behavior in focused encounters. (pp. 209-238). Cambridge: Cambridge University Press.

Learning how to look and listen. (n.d.).

Miranda, L. (2015). Hamilton: an American Musical [MP3]. New York: Atlantic Records.

Ochs, E. & Taylor, C. (1995). The “Father Knows Best” dynamic in dinnertime narratives. In K. Hall and M. Bucholtz (eds.), Gender articulated: language in the socially constructed self. (pp. 97-120). New York: Routledge.

Ronny and the Daytonas. (1964). G.T.O. [Song]. Mala Records.

Scattergories board game. (1988). Hasbro.