Vibe or Light? Someone or All?: Effects of Feedback Modality and Who Receives Feedback on Meeting Support

We explore how meeting members modify their responses to feedback according to the feedback modality and who receives the feedback. We conducted a field study comparing four feedback conditions: three using the vibrotactile modality (chair vibration) and one using the visual modality (spotlight flashing). The three vibrotactile conditions differ in the feedback recipient: a potential speaker (a member whom other members would like to hear speak next, or a member who is willing to speak next), the current speaker, and all members. Regarding modality, the vibrotactile modality produced a moderate level of distraction (whereas the visual modality was subtle enough to be ignored) and led to more turn-taking than the visual modality. Regarding recipients, members felt more positively about feedback when the potential speaker, rather than the current speaker, received it. Members also engaged in more turn-taking when all members or the current speaker, rather than the potential speaker, received feedback.


INTRODUCTION
People relate differently to computers with different interfaces (Reeves and Nass (1996)). In the future, when people and highly intelligent computers coexist symbiotically, how will the interfaces of such computers impact how people relate to them and how people conduct their intellectual and creative activities? Meetings are an example of such intellectual and creative activities. Fifty percent of meeting time is unproductive and twenty-five percent is spent on topics unrelated to the agenda (Doyle and Straus (1993)); that is, it is not straightforward for meeting members to conduct meetings effectively by themselves. Therefore, there has long been a strong demand for effective computer-based support for facilitating meetings.
Previous work on meeting support using computers has explored how the computer interface impacts meeting members' responses to computers and how it impacts their meetings. In most existing systems, computers provide feedback on how actively each member has so far participated in a meeting (i) using the visual modality (e.g., DiMicco et al. (2004); Bergstrom et al. (2007); Kim et al. (2008); Nowak et al. (2012)), and (ii) to all members (e.g., (DiMicco et al. (2004); Bergstrom et al. (2007); Kim et al. (2008); Balaam et al. (2011)), to encourage or discourage their participation in the meeting. However, this approach does not necessarily lead to satisfactory results: (i) visual feedback sometimes distracts members, and (ii) providing feedback to all members sometimes makes members who are less active in their participation feel negatively about feedback.
In this paper, we consider meeting support using computers and explore how meeting members modify their responses to feedback depending on two key factors of the interface: the modality used for feedback ((i) above) and which members receive feedback ((ii) above). To fully explore this, computers for meeting support must be highly intelligent and comparable to a human facilitator; however, such computers are beyond what is currently available. In addition, human social behavior is difficult to reproduce in a laboratory experiment. We therefore conducted a field study using the Wizard of Oz method with a human facilitator. We obtained the cooperation of 17 office workers in a company and studied their actual brainstorming sessions. In our field study, we considered two modalities for feedback and three subsets of members to receive it. Specifically, we compared four feedback conditions: three using the vibrotactile modality and one using the visual modality. The vibrotactile feedback conditions vibrate each member's chair, and the visual feedback condition flashes a spotlight on the ceiling. The three vibrotactile conditions differ in which members receive feedback: (1) a member whom other members would like to hear speak next, or a member who is willing to speak next (a potential speaker), (2) the member who is currently speaking (the current speaker), and (3) all members.

RELATED WORK
Much previous work addresses supporting meetings using computers and explores how such support impacts the responses of meeting members to computers. Many existing meeting support systems monitor either verbal (e.g., Leshed et al. (2009); Tausczik and Pennebaker (2013)) or nonverbal (e.g., Balaam et al. (2011); Nowak et al. (2012); Sanchez-Cortes et al. (2012)) communication among members and provide real-time feedback to members regarding specific aspects of their communication (e.g., Soller et al. (2005)). In this section, we analyze previous work according to three criteria.
The second criterion is who receives feedback ((ii) in section 1). Many meeting support systems provide feedback to all members (e.g., DiMicco et al. (2004); Bergstrom et al. (2007); Kim et al. (2008); Bachour et al. (2010)). Members who are less active in their participation are reported to sometimes feel negatively about such feedback; for instance, they feel frustrated that other members also receive feedback and, knowing how actively they have participated in the meeting, feel forced to participate, or they feel alienated by being unable to participate fully (Bachour et al. (2010); Schiavo et al. (2014); Tausch et al. (2016)). This suggests that providing feedback to all members may not be appropriate for less active members. When some, but not all, members receive feedback, how would members, especially those who are less active, feel about it? Previous work has not explored this.
The third criterion is whether feedback effectively facilitates meetings. How do members modify their responses to feedback when the feedback effectively facilitates meetings? The basic principle of previous work is to provide feedback on group dynamics (the balance of participation, primarily in speaking activities, among members; for example, how often and how long each member has spoken so far in the meeting) (e.g., DiMicco et al. (2004); Bergstrom et al. (2007); Kim et al. (2008); Streng et al. (2009); Bachour et al. (2010); Tausch et al. (2016)), to make members aware of their group dynamics and consequently lead them to voluntarily modify their behavior (Tausczik et al. (2013)). Previous work has adopted objective measures, such as the probability of turn-taking (e.g., Kim et al. (2008); Terken and Sturm (2010)), the balance of member participation (e.g., DiMicco et al. (2004); Kim et al. (2008); Bachour et al. (2010); Tausch et al. (2016)) and of types of remarks (e.g., Leshed et al. (2009); Snyder et al. (2015)), as well as members' subjective evaluations of these measures (e.g., DiMicco et al. (2004); Tausch et al. (2016)). In some systems, individual members voluntarily modified their behavior in response to feedback, resulting in improved group dynamics (e.g., DiMicco et al. (2004); Tausczik and Pennebaker (2013); Tausch et al. (2016)). However, in other systems, less active members did not necessarily increase their participation, although more active members often decreased theirs (e.g., DiMicco et al. (2004); Bachour et al. (2010)). No previous work has explored whether feedback effectively facilitates meetings while considering feedback modality and who receives feedback, as described above.

RESEARCH QUESTIONS
We pose the research questions based on the above three criteria. Feedback modality (criterion 1) relates to RQ1, and feedback recipients (criterion 2) relates to RQ2. Facilitation of meetings (criterion 3) considering the modality relates to RQ1-2, and considering the recipients relates to RQ2-2.

FEEDBACK DESIGN
To simplify our analysis of how members modify their responses to feedback, we only use feedback in its simplest form. Specifically, feedback carries a simple message: either "please speak" or "please encourage someone else to speak."

Comparison with Vibrotactile Feedback

We compare the visual modality with the vibrotactile modality, using the visual modality as a baseline because a number of previous studies adopted it. We use light (room illumination) to implement visual feedback. Using light as feedback allows members to focus on their primary activities without being distracted, making it a fair baseline for comparison with vibrotactile feedback.

Feedback Pattern
We design a feedback pattern that minimally distracts meeting members.
Regarding vibration, we apply the guidelines for designing vibration-based interfaces (Saket et al. (2013)). They found that three factors contributed to a user's perceived urgency of vibration alerts: the gap length between vibrations, the number of gaps, and the vibration length. They also found that the pattern of "short on (vibration) and long off (gap)" was perceived as the least urgent. We argue that a lower level of perceived urgency causes less distraction to members, and we apply their findings to our study. In a preliminary study, we varied the on and off lengths of a "short on and long off" vibration pattern and chose "one second vibration on and two seconds vibration off" to avoid distracting members. To ensure that members notice the feedback, we decided to use three consecutive pairs of "one second on and two seconds off." Regarding light, we conducted a preliminary study and decided to use a similar pattern of three consecutive pairs of "one second light on and two seconds light off."
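As a concrete illustration, the chosen burst pattern can be sketched as a simple on/off schedule (a hypothetical sketch; the function and variable names are ours, not from the study's implementation):

```python
def feedback_schedule(on_s=1.0, off_s=2.0, repeats=3):
    """Build one feedback burst: three consecutive pairs of
    "one second on and two seconds off"."""
    schedule = []
    for _ in range(repeats):
        schedule.append(("on", on_s))    # vibration motor / light on
        schedule.append(("off", off_s))  # gap between pulses
    return schedule

# One burst lasts 3 * (1 + 2) = 9 seconds in total.
print(feedback_schedule())
```

A driver would simply walk this list, switching the actuator on or off for each listed duration.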

Members Receiving Feedback
We consider different subsets of meeting members to receive feedback, and discuss the member subsets that do not make members, especially those who are less active in their participation, feel negatively about feedback, and make members participate in a meeting. Meetings are essentially conversations. In most conversations, there is a turn-taking system, which operates in the following manner. In a conversation between two people, there is always one listener, who will always become the next speaker when the current speaker stops speaking. In a conversation between three or more people, the next speaker is selected based on the turn-taking rules (Sacks et al. (1974)) shown in Table 1. When a listener wants to speak next, he/she communicates his/her intention to speak through eye contact or a gesture to the current speaker, and he/she either is selected as the next speaker by the current speaker (rule (a)) or voluntarily starts speaking (rule (b)).  We now consider the following three scenarios, in which different members receive feedback, and examine which scenario makes less active members feel least negatively about feedback.
In the first scenario, a member whom other members would like to hear speak next, or a member who is willing to speak next (a potential speaker) receives feedback. Upon receiving feedback, a potential speaker voluntarily speaks. This means that a potential speaker takes action following either rule (a) or rule (b) in Table 1. In this scenario, a less active member may not feel negatively because other members are not aware that the member received feedback.
In the second scenario, a member who is currently speaking (a current speaker) receives feedback. Upon receiving feedback, the current speaker identifies a potential speaker among the other members and encourages the identified potential speaker to speak. Namely, the current speaker follows rule (a). Identifying a potential speaker and encouraging that member to speak are relatively straightforward, because the current speaker is likely to be a facilitator or leader. In this scenario, a less active member may not feel negatively because he/she receives feedback, not directly from the computer, but indirectly through the current speaker, a peer member.
In the third scenario, all members receive feedback. Upon receiving feedback, each member identifies a potential speaker among all members. If a member identifies him/herself as a potential speaker, he/she voluntarily speaks. If a member identifies another member as a potential speaker, he/she encourages the identified potential speaker to speak. In this scenario, a less active member may not feel negatively. This is because, in the former case, who is the potential speaker is not explicitly disclosed to other members, and in the latter case, he/she receives feedback indirectly through the peer member.

FIELD STUDY
We designed a field study of brainstorming sessions under the following four feedback conditions. We compared Vibe-All and Light-All to answer RQ1, and compared Vibe-PS, Vibe-CS, and Vibe-All to answer RQ2.

Vibe-Potential Speaker (Vibe-PS):
The system provides vibrotactile feedback only to a potential speaker. The feedback recipient is to voluntarily speak.

Vibe-Current Speaker (Vibe-CS):
The system provides vibrotactile feedback only to the current speaker. The feedback recipient is to identify a potential speaker and encourage the identified potential speaker to speak.

Vibe-All:
The system provides vibrotactile feedback to all members. Each feedback recipient is to identify a potential speaker. If a member identifies him/herself as a potential speaker, he/she is to voluntarily speak. If a member identifies another member as the potential speaker, he/she is to encourage the identified potential speaker to speak.

Light-All:
The system provides visual feedback to all members. Each feedback recipient is to take the same action as the members in Vibe-All.
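The recipient logic of the four conditions can be summarized in a small sketch (hypothetical; in the study, feedback was actually triggered manually by the wizard, and the names below are ours):

```python
def feedback_recipients(condition, members, current_speaker, potential_speaker):
    """Return which members receive feedback under each condition."""
    if condition == "Vibe-PS":
        return [potential_speaker]   # only the potential speaker's chair vibrates
    if condition == "Vibe-CS":
        return [current_speaker]     # only the current speaker's chair vibrates
    if condition in ("Vibe-All", "Light-All"):
        return list(members)         # every member's chair, or the shared spotlight
    raise ValueError(f"unknown condition: {condition}")

members = ["P1", "P2", "P3", "P4"]
print(feedback_recipients("Vibe-PS", members, "P1", "P3"))
print(feedback_recipients("Vibe-All", members, "P1", "P3"))
```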

Study Design
We employed a within-subjects (i.e., within-groups) design and assigned participants to four mixed-gender groups, each with four or five participants. Each group performed brainstorming sessions in all four feedback conditions (Figure 1). The independent variable was the type of feedback, forming four conditions. The dependent variables are shown in Figure 3. In the three Vibe conditions, the seat of each participant's chair vibrated (Figure 2c). In the Light-All condition, the spotlight on the ceiling flashed (Figure 2b). To prevent participants from brainstorming on the same topic in subsequent conditions, four different topics were used. The feedback conditions were counterbalanced and the order of topics was randomized.

Setup
We partnered with a company and conducted a study in the field with brainstorming sessions that its employees hold as a part of their regular business, because it is easier to reproduce social behavior in the field than in a laboratory (Hornecker and Nicol (2012)). However, conducting a study in the field came with its own limitations: we were not able to control all variables such as the composition of groups, nor were we able to use a large number of groups or consider other feedback conditions.
All brainstorming sessions were held in the same meeting room in the company's building (Figure 2a).

Wizard of Oz Method to Provide Feedback
To fully answer RQ1 and RQ2, computers in meeting support must be highly intelligent and comparable to a human facilitator. However, such computers are beyond what is currently available. We therefore adopted a Wizard of Oz method.
During each session, the wizard observed, from a separate room, the participants in the meeting room. The wizard listened to live audio, watched live video from video cameras in real time, and operated the feedback application we developed to send feedback to the participant(s). When the wizard touched the trigger button, the application sent a signal to the chair of the member(s) chosen to receive feedback or to the spotlight on the ceiling.
The wizard sent feedback (touched the trigger button) according to the following requirements:
- When the wizard identified a member who met either (1) or (2) below during a session, she always sent feedback: (1) a member who had been less active in terms of verbal behavior and whom other members would like to hear speak next, or (2) a member who was willing to speak next and expressed his/her intention to speak through non-verbal behavior (for example, facial expressions, gestures, or posture).
- The wizard sent feedback at least once to every member during each session.
We hired a professional facilitator as the wizard to help ensure that the wizard properly identified members meeting the above requirements.

Participants
Seventeen participants (12 males and 5 females, average age 39, range 25-59) took part in the study. We assigned participants to four mixed-gender groups, each with four or five participants. All participants worked in the company we partnered with and knew each other. They engaged in research and development and regularly held brainstorming sessions.

Task and Procedure
Each group conducted four sessions, each lasting approximately 40 minutes. Each group was given four topics to discuss: ideas to support employees (1) in performing their individual work, (2) in performing their group work, (3) in conducting activities during their breaks, and (4) in performing their work outside the company premises, in a manner that increases their work productivity and supports a healthy lifestyle. This company designs and manufactures office furniture and provides solutions to improve the work environment in the office. The four topics above were not prepared for the sake of our study but were the actual topics that the company employees were to discuss as a part of their regular business at the time of our study.
Before each session, we instructed the participants on how our system behaves and what the feedback means (Figure 1). Note that the participants knew which feedback condition was used in each session. We also explained that, when participants received feedback during the session, they could voluntarily decide whether and when to act on it. During the session, the wizard monitored live audio and video from the cameras and operated the feedback application. After each session, we asked participants to complete a 5-point Likert scale questionnaire. After all sessions were completed, we asked the participants to complete a multiple choice questionnaire. We then conducted a semi-structured group interview with each group.
We also conducted a test to examine each participant's personality (how active he/she is, see "Introversion-Extroversion Index (IEI)" in section 5.6) after all sessions were completed. In addition, we conducted an informal interview with the wizard. Finally, this study was approved by the research ethics committee of our institution.

Measures
To answer our research questions, we used quantitative and qualitative measures (Figure 3).

Probability of Turn-Taking following the Feedback
Using recorded videos, we examined whether turn-taking occurred following the feedback. Table 2 shows the criteria used to determine whether turn-taking occurred. Using these criteria, we obtained the probability of turn-taking following the feedback.

Balance of Participation among Members

Previous work (e.g., Tausczik and Pennebaker (2013)) uses the modified Gini coefficient (Weisband et al. (1995)) to measure the balance of participation among members. The Gini coefficient is a measure of inequality, ranging from 0 (perfectly equal) to 1. The balance of participation is obtained by subtracting the Gini coefficient from 1.
Using recorded videos, we obtained the balance of participation in speaking activities, namely the total speaking length (total time length of all remarks of each member) and the speaking frequency (frequency of speaking of each member), similarly to the existing work.
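The balance measure can be sketched as follows. This is a minimal sketch using the standard Gini computation; the modified coefficient of Weisband et al. (1995) may differ in detail, and the speaking lengths below are illustrative, not study data:

```python
def gini(values):
    """Gini coefficient over non-negative contributions
    (0 = perfectly equal; approaches 1 as one member dominates)."""
    n = len(values)
    total = sum(values)
    if n == 0 or total == 0:
        return 0.0
    # mean absolute difference over all pairs, normalized by twice the mean
    mad = sum(abs(a - b) for a in values for b in values) / (n * n)
    return mad / (2 * total / n)

def balance_of_participation(speaking_lengths):
    """Balance = 1 - Gini, as defined in the paper."""
    return 1.0 - gini(speaking_lengths)

# Four members speaking equally vs. one member dominating.
print(balance_of_participation([60, 60, 60, 60]))  # perfectly balanced -> 1.0
print(balance_of_participation([100, 0, 0, 0]))
```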

Time Percentage of Each Remark Type
Using recorded videos, we examined what types of remarks members made during the sessions, in the following manner. Using a modified version of the remark types defined by Leshed et al. (2009), we coded each remark from members into six types (Table 3). We ignored backchannels (short acknowledgements) (Den et al. (2011)), laughter, and fillers. Remarks are delimited by turn-taking between speakers. Two coders coded the remark types independently; Cronbach's alpha between the coders was 0.85, and they settled disagreements through discussion. For each type of remark given by all members in a session, we obtained the time percentage of the remark type, defined as the ratio of the total speaking length of the remark type to the total speaking length of all remark types combined.

5-point Likert Scale Questionnaire.
The questionnaire distributed to the participants after each session contained 15 questions: three for distraction of participants, nine for members' positive or negative feeling, and three for facilitation of meetings. Each participant answered using the Likert scale of 5 levels (1: strongly disagree, 2: disagree, 3: neither agree nor disagree, 4: agree, and 5: strongly agree).

Multiple Choice Questionnaire.
The questionnaire conducted after all sessions contained three questions: "Which system do you like best?", "With which system do you feel most comfortable?", and "With which system are you most satisfied concerning the productivity of the brainstorming sessions?" For each question, each participant chose one of the four feedback conditions.
Questionnaires with Optional Open-ended Questions.
The above two questionnaires also asked participants to provide free-form comments on the feedback that they experienced.

Semi-structured Group Interviews
To understand how participants experienced different feedback conditions, we conducted a semi-structured group interview with each group after all sessions.

Introversion-Extroversion Index (IEI)
To explore how members' personalities (whether they are more active or less active) impact their feelings about feedback (RQ2-1) and the facilitation of meetings (RQ2-2), we conducted the Awaji-Okabe introversion/extroversion test (Awaji et al. (1932)) with each participant to obtain his/her introversion-extroversion index (IEI). This test consists of 50 questions. We equated extroverted and introverted with more active and less active, respectively.
To address RQ2-1, we obtained the correlation of participants' IEIs with their ratings on the Likert scale questionnaire (Q4-Q12) and with the feedback conditions selected in their responses to the multiple choice questionnaire. To address RQ2-2, we obtained the correlation of participants' IEIs with their probabilities of turn-taking following the feedback and with their ratings on the Likert scale questionnaire (Q13-Q15).

Data Analysis
We analyzed the data collected in our study. Data on feedback provided to participants owing to errors in the wizard's operation and judgment were excluded from our analysis.

Group-level Data (rectangular boxes with a filled square in Figure 3). Regarding the probability of turn-taking following the feedback (Figure 4), the balance of participation among members (Table 4), and the time percentage of each remark type (Table 5), we conducted a one-way repeated-measures (RM) ANOVA (within-group factor: feedback conditions) with post hoc pairwise comparisons using the Bonferroni adjustment. We analyzed the correlation between participants' IEIs and their probabilities of turn-taking following the feedback, using Pearson's correlation coefficients.

Individual-level Data (rectangular boxes with a square in Figure 3). Regarding participants' responses to the 5-point Likert scale questionnaire (Table 6), we conducted a one-way repeated-measures (RM) ANOVA (within-subject factor: feedback conditions) with post hoc pairwise comparisons using the Bonferroni adjustment. We analyzed the correlation between participants' IEIs and the ratings they provided in the 5-point Likert scale questionnaire, using Pearson's correlation coefficients.
Regarding participants' responses to the multiple choice questionnaire (Figure 6), we used a χ2 test followed by Ryan's multiple comparison test for proportions, because participants' responses are categorical data (frequencies of each category relative to the total). We analyzed the correlation between participants' IEIs and the feedback conditions selected in their responses to the multiple choice questionnaire, using the correlation ratio.
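As an illustration of the correlation analyses, Pearson's coefficient can be computed directly (a minimal sketch; the IEI scores and ratings below are made up for illustration, not study data):

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative only: hypothetical IEI scores vs. Likert ratings for one condition.
ieis = [12, 25, 31, 40, 47]
ratings = [2, 3, 3, 4, 5]
print(round(pearson_r(ieis, ratings), 3))
```

In the study the coefficients were additionally tested for significance; a library such as scipy.stats.pearsonr also returns the accompanying p-value.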

Participants' Comments

We conducted qualitative inductive analysis of the transcripts of the group interviews and the written comments provided in the questionnaires with optional open-ended questions.

RESULTS
Sixteen brainstorming sessions were conducted, each lasting 40.9 minutes on average. The wizard provided feedback 21.9 times per session, on average. Each participant received feedback in a session the following number of times, on average: Vibe-PS: 6.1; Vibe-CS: 4.6; Vibe-All: 21.5; Light-All: 20.3. Error bars in the figures in this section show the standard error of the mean. Asterisks in figures and tables show significant differences (*: p<0.05, **: p<0.01, ***: p<0.001) in the post hoc tests described in section 5.7.

Probability of Turn-Taking following the Feedback

Figure 4 shows the probability of turn-taking following the feedback at three time intervals (10, 20, and 30 seconds after the feedback). One-way RM ANOVA on the probability at these intervals showed a significant main effect of feedback condition at all intervals (10 seconds: F(3,9) = 22.178, p = 1.71E-04; 20 seconds: F(3,9) = 10.00, p = 0.003; 30 seconds: F(3,9) = 7.983, p = 0.007). Post hoc tests revealed significant differences between the four feedback conditions, as shown in Figure 4. For all three time intervals, the probability for Vibe-PS was significantly lower than that for all other feedback conditions. At 10 seconds, the probability for Vibe-All was significantly higher than that for Light-All. Only one member receives feedback with Vibe-PS and Vibe-CS, whereas all (four or five) members receive feedback with Vibe-All and Light-All. Intuitively, then, the probability of turn-taking following feedback should be much larger with Vibe-All and Light-All than with Vibe-PS and Vibe-CS; however, our results differed from this intuition. Pearson's correlation coefficients between participants' IEIs and their probabilities of turn-taking were not significant for Vibe-PS, Vibe-CS, or Vibe-All.

Balance of Participation among Members

Table 4 shows the balance of participation among the members in both total speaking length and speaking frequency. One-way RM ANOVA on the balance of participation did not show a significant main effect of feedback condition for either measure.

Time Percentage of Each Remark Type

Table 5 shows the time percentage of each remark type, the ratio of the total speaking length of the remark type to the total speaking length of all remark types combined. One-way RM ANOVA on the time percentages did not show a significant main effect of feedback condition for any remark type.

Questionnaire to Participants

5-point Likert Scale Questionnaire. Table 6 shows means and one-way RM ANOVA results for the ratings of the 5-point Likert scale questionnaire. Participants agreed that Vibe-All distracted participants more than Light-All (Q2); that Vibe-PS made it easier to understand the intent of the system than Vibe-CS (Q5); that Light-All produced discussion results with which participants more strongly agreed, and/or were more satisfied, than Vibe-CS (Q9); and that Vibe-PS and Vibe-CS elicited more diverse opinions from members than Light-All (Q15). With respect to Q4-Q12 and Q13-Q15, Pearson's correlation coefficients between participants' IEIs and their ratings were not significant for Vibe-PS, Vibe-CS, or Vibe-All.
Multiple Choice Questionnaire. Figure 6 shows the selection frequency of each of the four feedback conditions in the multiple choice questionnaire. Participants chose Vibe-CS least often in all three questions. A χ2 test for each question showed a significant main effect of feedback condition for comfort, χ2(3)=8.00, p=0.046. A post hoc Ryan test revealed that the comfort value for Vibe-PS was significantly higher than for Vibe-CS, p=0.008 (Figure 6b). For all three questions, the correlation ratio between participants' IEIs and the feedback condition selected by participants was not significant.

Participants' Comments

We conducted qualitative inductive analysis of participants' comments and identified the following themes: encouraging actions, sense of experience sharing, sense of participation, identifying a potential speaker, and sense of comfort. Sense of experience sharing refers to the sense of involvement that a member develops from sharing the same goals and values with other members of the group. Sense of participation refers to the sense of self-involvement that a member develops from addressing an issue of the group together with other members.

Feedback Modality (RQ1)
Encouraging Actions. Our results suggest that vibrotactile feedback encouraged members to take voluntary actions. In contrast, visual feedback using light did not.
[Vibration (vibrotactile modality)] P11: Vibration motivated me to help advance meetings. P15: I felt encouraged to listen and speak. P13: Vibration motivated me to speak.
[Light (visual modality)] P01: I felt less pressure with light than with vibration, and it led me to often ignore the light feedback. P02: Light did not encourage me to take an action. P08: Light made me feel that it was for someone else.

Sense of Experience Sharing and Sense of Participation.
Our results suggest that visual feedback using light helped to create a strong sense of experience sharing and a weak sense of participation. In contrast, vibrotactile feedback led to a weak sense of experience sharing and a strong sense of participation. During a group interview, one participant commented on the difference that vibration and light created in this regard. [Vibration (vibrotactile modality) vs. Light (visual modality)] P05: Light created a strong sense of feedback sharing. As it is obvious that all members received the light, I often felt "I do not need to take an action, because others also received the light and may take an action" and became dependent on others. On the other hand, although I knew that all members received the vibration, vibration is not visible and did not create a strong sense of feedback sharing. As a result, I hardly became dependent on others.

Members Receiving Feedback (RQ2)
Sense of Participation. Our results suggest that a potential speaker receiving feedback (Vibe-PS) helped to create a weak sense of participation. In contrast, either a current speaker (Vibe-CS) or all members (Vibe-All and Light-All) receiving feedback did not appear to have the same effect.
[Vibe-PS] P16: Even if I do not speak, others will not notice and will not think "that member has not spoken". P05: Even if I do not speak, it does not hinder the meeting.
Identifying a Potential Speaker. Vibe-CS, Vibe-All, and Light-All expect each member to identify a potential speaker. Our results suggest that, when the current speaker received feedback (Vibe-CS), he/she found it difficult to identify a potential speaker. In contrast, when all members received feedback (Vibe-All and Light-All), members felt positively about flexibility of identifying a potential speaker.
[Vibe-CS] P10: When I received a message from the system, it was hard to determine whom I should encourage to speak. P09: I found it difficult to think about both what I was saying and whom I should encourage to speak. P06: This (Vibe-CS) may be suitable for someone with a certain level of meeting facilitation skills.
[Vibe-All, Light-All] P08: Ambiguous message (e.g., "this feedback may be for me or may be for someone else") enabled me to actively participate. P17: I felt that the system was flexible and allowed me to act based on my own will.
Sense of Experience Sharing. Our results suggest that all members receiving feedback (Vibe-All and Light-All) helped to create a strong sense of experience sharing and, as a result, members felt that it was easier to take voluntary actions.
[Vibe-All, Light-All] P03: I believe that all members receiving feedback contributed to the sense of experience sharing and made everyone feel that he/she should participate and advance the meeting. P10: Because everyone knows, it was easier to proceed with the conversation. P15: I felt that all members being aware raised the level of recognition. P03: I felt that it was easier to speak, because there was no pressure from being the only one who received feedback.

Distraction of Members (RQ1-1)
Although members felt significantly more distracted by vibrotactile feedback (Vibe-All) than by visual feedback (Light-All), their ratings were not high for either (Table 6, Q2). In addition, for both types of feedback, members felt that they could mostly concentrate on the discussion even with feedback (Table 6, Q3). Furthermore, members sometimes ignored the visual feedback (Section 6.5.1). Considering that feedback should neither distract members nor be ignored by them, we conclude that vibrotactile feedback provides a moderate level of distraction compared to visual feedback.

Facilitation of Meetings (RQ1-2)
Vibrotactile feedback (Vibe-All) encouraged significantly more active turn-taking than visual feedback (Light-All) immediately after feedback (Figure 4). This is because vibration encouraged members to take action (Section 6.5.1) and helped them organize more active turn-taking.
In addition, vibrotactile feedback (Vibe-All) helped to create a weak sense of experience sharing and a strong sense of participation and, in contrast, visual feedback (Light-All) helped to create a strong sense of experience sharing and a weak sense of participation (Section 6.5.1). The social compensation effect (Williams and Karau (1991)) and social loafing effect (Latané et al. (1979)) may explain these findings with vibrotactile and visual feedback, respectively.

Feeling of Members (RQ2-1)
When members received feedback as a potential speaker (Vibe-PS) rather than as a current speaker (Vibe-CS), they felt significantly more positive: they found it easier to understand the intent of the system (Table 6, Q5) and felt more comfortable (Figure 6b). This is because, with Vibe-PS, it is clear what the feedback recipient (the potential speaker) is expected to do: to speak voluntarily. In contrast, with Vibe-CS, the feedback recipient (the current speaker) is expected to identify a potential speaker and to encourage that member to speak. The current speaker who received feedback often found this difficult (Section 6.5.2). As a result, they found it more difficult to understand the intent of the system and became less comfortable, so they felt more negatively about feedback.
Vibe-CS and Vibe-All both require members to identify a potential speaker, but our results show that members responded to them differently: there were no comments indicating difficulty in identifying a potential speaker with Vibe-All, unlike with Vibe-CS; there were even some positive comments appreciating the flexibility of identifying the potential speaker with Vibe-All. This is because, with Vibe-All, all members received feedback. This likely led to a bystander effect (Latané and Darley (1970)), which increased members' tolerance for requests from the system. We now discuss how participants' personality (whether they are more active or less active) affected participants' feelings about feedback. As discussed in Section 2, with existing meeting support systems in which all members receive feedback, less active members often feel negatively about feedback. In contrast, in our study, participants' personality (IEI) did not impact the questionnaire responses for any of the three subsets of members receiving feedback (Vibe-PS, Vibe-CS, and Vibe-All) (Section 6.4).

Facilitation of Meetings (RQ2-2)
Either all members (Vibe-All) or the current speaker (Vibe-CS) receiving feedback resulted in significantly more active turn-taking than the potential speaker receiving feedback (Vibe-PS) (Figure 4). With Vibe-All, members tended to develop a strong sense of experience sharing, which made it easier for individual members to take action (Section 6.5.2); members might also have increased their efforts due to the social compensation effect (Williams and Karau (1991)) rather than the social loafing effect (Latané et al. (1979)). As a result, there was significantly more active turn-taking than with Vibe-PS. This could also be explained by social facilitation (Triplett (1898); Allport (1924)), a phenomenon whereby task performance increases through the mere presence of others who perform the same task (co-action effect) or who act as passive spectators (audience effect), although this was not observed in participants' comments. Note that with both Vibe-CS and Vibe-PS, only one member receives feedback. Upon receiving feedback, the potential speaker (Vibe-PS) developed a weaker sense of participation than the current speaker (Vibe-CS) (Section 6.5.2). As a result, Vibe-CS resulted in significantly more active turn-taking than Vibe-PS.
We now discuss how participants' personality affected facilitation of meetings. As discussed in Section 2, in some existing meeting support systems, less active members did not necessarily increase their participation, whereas more active members usually decreased their participation. In contrast, in our study, participants' personality (IEI) had no impact on the probability of turn-taking following feedback in any of the three subsets of members receiving feedback (Vibe-PS, Vibe-CS, and Vibe-All) (Section 6.1).

LIMITATIONS
We now discuss some limitations of our study. First, our study did not include a baseline condition: that is, no sessions were conducted without feedback. The feedback conditions we used may or may not be more effective than such a baseline in supporting meetings. Second, in comparing different modalities (Vibe-All and Light-All), each member received feedback "privately" with Vibe-All, whereas all members shared the same light and received feedback "publicly" with Light-All. These "private" and "public" aspects of Vibe-All and Light-All may or may not have contributed to our findings, and it is not clear how much they contributed to the difference in the sense of experience sharing. If members "privately" received visual feedback, for instance through their smartphones, our findings might need to be revised. Third, our sample size is small. A larger sample size may change our results, such as the balance of participation, the time percentage for each remark type, and the correlation between participants' IEIs and some data, where there were no statistically significant differences between the feedback conditions.

CONCLUSION
We conducted a field study exploring how members modify their responses to feedback when different modalities are used for feedback and when different subsets of members receive feedback. Table 7 and Table 8 summarize our findings on the two key research questions. We hope our findings inspire designers, developers, and researchers working on meeting support and other types of group collaboration support, as well as on vibrotactile interfaces.