Human-agent coordination in a group formation game
Scientific Reports
Coordination and cooperation between humans and autonomous agents in coordination games raise interesting questions about human decision-making and behavioral change. Here we report findings from a group formation game, played on a small-world network with different mixtures of human and agent players. In the experiment, human players were incentivized by rewards to prioritize their own cluster, while the agents' decision-making model was derived from purely cooperative games between human players. The players were arranged into three different setups to examine the overall effect of embedding cooperative autonomous agents in the teams. We find that the human subjects adapt to the autonomous agents, avoid risk, and divide their in-game actions between selfish and cooperative behavior, which keeps the overall performance efficient. In addition, the results of the two hybrid human-agent setups suggest that the group composition affects the evolution of the clusters. Our findings indicate that handing more control to humans can help maximize the overall performance of hybrid systems in settings that are not purely cooperative.
Introduction
In a digitalized human society, interactions between humans and artificial autonomous agents are becoming increasingly common. This calls for deeper understanding of and research into their consequences in various social situations and setups, such as games, health care, retail, and transportation3,4,5,6. Given that human behavior is far from homogeneous, the emergent behavior of such systems may be hard to predict. If we want to realize systems in which cooperation and coordination between humans and agents work as desired, their macro-level dynamics need to be understood from the micro-level perspective of human-human, human-agent, and agent-agent interactions. In addition, benchmark models are needed that take into account the variation in the psychological preferences that affect human decision-making7,8,9,10.
Games, and online games in particular, provide valuable frameworks for studying the dynamics not only of purely human groups but also of mixed human-agent groups. As an example, human-agent games requiring cooperation, such as the iterated prisoner's dilemma, have been used to study how human volunteers' perception of their opponent as an agent or a human affects play11. An interesting finding of that study was that the level of cooperation decreased when volunteers were told they were playing with agents. Similarly, introducing social networks into the research design has highlighted the non-local effects of interactions in such games. For example, Shirado and Christakis12 studied the collective performance of humans and agents (or bots) trying to solve a coordination game on a network, and how the degree of randomness in the agents' actions affects the outcome of the game. They found that agents with a certain level of randomness in their behavior improve the overall performance of the hybrid human-agent group. In addition, games with autonomous agents acting on behalf of humans have been studied in the context of cooperation and the evolution of behavior13,14,15.
In this paper, a hybrid system of humans and autonomous agents is investigated in a cooperative game setup on a virtual network. To observe how cooperative autonomous agents influence the outcomes, we use an agent model fitted to human behavior observed in previous experiments. The experiments are set up so that agents are mixed into the human population in various ratios, and the resulting dynamics are compared with two contrasting baselines: human-only and agent-only experiments. The framework used in this study and in our previous work10 is in the spirit of the experiments by Kearns and colleagues (see ref. 16 and the references therein), who studied the effect of network structure on the efficiency with which humans solve problems such as graph coloring and consensus1,16,17. Furthermore, the group formation game resembles matching problems, in which members of two different groups form pairs for mutual benefit.
In experiments involving problem-solving tasks, several studies have explored the complex relationship between individual-level human behavior, group performance, and network properties20,21,22. It has been shown that coordination, cooperation, and other social behaviors in human groups can be described and analyzed through carefully designed online and incentivized experiments. In our experimental setup, players are incentivized to position themselves in groups. The game can be placed in the context of game-theoretic studies of social group formation, e.g., games based on Schelling's segregation model23,24,25 and, more generally, hedonic coalition formation games26,27.
Our overall approach comprises two experiments: a previous study10 and this study. In the former, we conducted an online computer-lab experiment in which human subjects were incentivized to form connected clusters or groups on a small-world network by cooperating and coordinating with members of other teams, with no overlapping information. Building on the results of that study, we constructed a data-driven model of autonomous agents that reproduces the subjects' decision-making to "cooperate." In the present setup, we combine humans and autonomous agents in a game (more details below), using a different incentive scheme for the humans and the human model obtained from our previous experiment for the agents. Our broad motivation is to gain insight into the effects of including cooperative autonomous agents in a game that allows some degree of competition between teams containing humans. We note that, as in previous studies of human-agent hybrid games12, the human players in our hybrid game are not informed that they are playing with human-like agents. While each team is allowed to work towards maximizing its own payoff, we examine whether the overall payoff can be maximized as well, as a public good28.
Materials and methods
The group formation game is played on a fixed regular network10 consisting of N nodes. Players are divided into M equally sized groups, each identified by a color. The general goal of each player is to maximize the cluster size of their own group by exchanging positions in the network. Figure 1 (right) shows the network configuration and an example of a cluster. The game is played in rounds: in a round, players of one color send requests to players of other colors on adjacent nodes to exchange locations, and the players at the receiving nodes either accept or reject these requests. The color that sends requests rotates periodically, so that each color has the same opportunity to send requests. The maximum number of rounds that can be played in each game is R. Hence, if the game does not end before round R, each player has at most R/M opportunities to send a request during the game. Similarly, each player can receive requests up to 2R/M times, since within a round a node whose color differs from the requesting color can receive multiple requests simultaneously. The total number of interaction opportunities in a game is denoted by T.
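The rotation of the requesting color and the resulting per-player bound of R/M send opportunities can be illustrated with a short sketch. The simple round-robin rotation order and all names here are our assumption; the paper only states that each color gets equal turns:

```python
def requesting_color(round_idx, colors):
    """Color allowed to send swap requests in a given round, assuming
    the requesting color simply rotates (our illustrative choice)."""
    return colors[round_idx % len(colors)]

def max_send_opportunities(R, M):
    """Upper bound R/M on the requests a player can send if the game
    runs for all R rounds (here R is divisible by M)."""
    return R // M

colors = ["red", "green", "blue"]  # M = 3 color groups
R = 15                             # maximum number of rounds
schedule = [requesting_color(r, colors) for r in range(R)]
```

With R = 15 and M = 3, every color appears exactly five times in the schedule, matching the R/M bound stated above.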
In our previous study10, subjects played the group formation game for a purely cooperative reward. The players were expected to collectively reach a final configuration on the network in which each of the M groups (colors) formed a cluster of the maximal size L. In that experiment, each of the M = 3 color groups consisted of L = 10 human players located on a regular network with periodic boundaries and randomized small-world links, as shown in Fig. 1 (right). The payoff function of the game was based on the average cluster progress (ACP), measured as the mean of the normalized sizes of the largest clusters of the three colors. The game ended either when the players reached the maximal clusters (ACP = 1.0) or when the number of played rounds reached 21. To increase the number of swaps in the game, the initial coloring of the network was chosen such that the average cluster progress was about 0.1 and no cluster exceeded size 5, after which random small-world links were added between the nodes.
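The ACP payoff measure can be sketched in a few lines of code. This is a minimal BFS-based illustration under our reading of the definition: `adj` maps each node to its neighbors, `color` gives each node's group, and all names are ours, not from the paper:

```python
from collections import deque

def largest_cluster_sizes(adj, color):
    """Size of the largest connected same-color cluster for each
    color, found by breadth-first search over same-color neighbours."""
    best, seen = {}, set()
    for start in adj:
        if start in seen:
            continue
        c = color[start]
        seen.add(start)
        size, queue = 0, deque([start])
        while queue:
            node = queue.popleft()
            size += 1
            for nb in adj[node]:
                if nb not in seen and color[nb] == c:
                    seen.add(nb)
                    queue.append(nb)
        best[c] = max(best.get(c, 0), size)
    return best

def acp(adj, color, L):
    """Average cluster progress: mean over colors of the largest
    cluster size normalized by the full group size L."""
    best = largest_cluster_sizes(adj, color)
    return sum(best.values()) / (len(best) * L)
```

For example, on a 6-node ring where each color occupies one contiguous half, both colors have a largest cluster of 3, so with L = 3 the ACP is 1.0; with alternating colors every cluster has size 1 and the ACP drops to 1/3.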
To understand and quantify human behavior in the game, we implemented and trained a model based on probability matching that uses local information, such as the cluster sizes and colors of the neighboring players. Using this model and the fitted parameters, we evaluated the levels of rationality and risk aversion shown by the subjects in the experimental sessions. We found that most games in the experimental sessions ended with the maximal clusters being formed. Varying the weights of the model parameters in simulations suggested that, for a purely cooperative goal, the subjects' decision-making, use of the limited information, and risk perception were close to optimal.
Table 1 Reward scheme

The experimental sessions of this study were conducted in a computer lab on the Aalto University campus with 30 volunteers recruited through social media advertisements. Before the experiment, we obtained informed consent from all volunteers using a signed consent form. None of the volunteers were minors. The experiment was carried out in accordance with the relevant guidelines, and the procedure was pre-approved by the Aalto University Research Ethics Committee (2017_02_bsen Experience). The practical settings were the same as in the previous experiment10: we used oTree30 as the framework for running the game, and limited the interaction between players and the visibility of others' workstations. A diagram of the graphical user interface is published with the previous experiment10. Before the experiment started, we gave the players a presentation and a simple tutorial introducing the game. The session lasted four hours, during which a total of 13 games were played; a single game lasted up to 15 rounds, taking roughly 20 minutes. The experiment was structured into three-game sessions, with rewards given as incentives consisting of a show-up bonus and performance-based pay. As in the previous experiment, rewards were given in the form of movie tickets: the show-up bonus for a three-game session was one movie ticket, and scoring at least 27 points was rewarded with a second movie ticket.
A game ended either when the maximum payoff of a single game was reached, that is, when the players formed a cluster of size L (20 points), or when the preset number of rounds (R = 15) was played. The initial network configurations, including the players' colors and the added small-world links, were selected before the experimental sessions to have some clustering while still being far from a completed configuration. If the players reached the 27 points required for the maximum payoff within the first two games of a three-game session, the third game was not played, since the full incentive for that session had already been earned. These parameter choices were motivated by simulations using the model (described below) and by our previous experimental study10. The simulations suggested that the game could be completed within the specified number of rounds. In addition, reducing the number of rounds allowed more games to be played within an experimental session, yielding more data from the various stages of the game, such as the initial formation of small clusters.
Setup A consisted purely of human players (N = 30) and served as the baseline for the population of players participating in the current reward scheme and experimental sessions. Setups B and C were hybrid populations of 15 humans and 15 autonomous agents, but with different concentrations of humans and agents in the color groups (see Fig. 1). These different compositions were chosen to uncover differences in human-agent interaction, behavioral change, and performance across the hybrid setups. Setup B consisted of three mixed groups of humans and agents (5 humans and 5 agents per group). Setup C consisted of a purely human group (10 humans), a mixed group (5 humans and 5 agents), and a purely agent group (10 agents). In addition to the experimental setups with human subjects, games with 30 autonomous agents were simulated 100 times; we refer to this as setup D.
In setups B, C, and D, the design of the autonomous agents was obtained by fitting the following model to the data of the previous experimental sessions. When sending or accepting requests, a player is assumed to associate a utility with each possible choice of neighbor (the goal being a group). Using probability matching, the probability of choosing option \(\omega\) is

$$p_\omega = \frac{e^{u_\omega}}{\sum_{\omega'} e^{u_{\omega'}}},$$

where the sum runs over all options, including the option \(\omega = 0\) of not sending a request (not moving), whose utility is fixed to \(u_0 = 0\). By choosing

$$u_\omega = \lambda + \alpha s_i + \beta s_j + \delta \langle s(c_j) \rangle,$$

the model simplifies further, and the experimental data could be fitted with a logistic function. In the last formula, \(s_i\) is the cluster size of the focal player, \(s_j\) is the cluster size of the chosen player, and \(\langle s(c_j) \rangle\) is the average cluster size of the remaining neighbors sharing the chosen player's color (see equation 4). The fit yields the values of the parameters \(\lambda\), \(\alpha\), \(\beta\), and \(\delta\) reported in our previous study10. Using this model, agents playing the purely cooperative strategy can be mixed with human players.
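The agents' choice rule can be sketched as a probability-matching (softmax) draw over the utilities above. The parameter values below are placeholders for illustration only (the fitted values are reported in the previous study10), and all function and variable names are ours:

```python
import math
import random

# Placeholder parameter values for illustration only; the fitted
# values of lambda, alpha, beta and delta are reported in ref. 10.
LAMBDA, ALPHA, BETA, DELTA = 0.5, -1.0, 1.0, 0.8

def utilities(s_i, options):
    """Utility of each swap option.

    s_i     -- cluster size of the focal player
    options -- list of (s_j, avg_same_color) pairs: the chosen
               neighbour's cluster size and the mean cluster size of
               the remaining neighbours sharing that colour
    The no-action option (utility fixed to 0) is appended last.
    """
    us = [LAMBDA + ALPHA * s_i + BETA * s_j + DELTA * s_avg
          for s_j, s_avg in options]
    return us + [0.0]

def choose(s_i, options, rng=random):
    """Probability matching: sample an option with probability
    proportional to exp(utility). Returns the index of the chosen
    option, or None for 'do not move'."""
    us = utilities(s_i, options)
    m = max(us)  # subtract the maximum to stabilise the exponentials
    weights = [math.exp(u - m) for u in us]
    r = rng.random() * sum(weights)
    for idx, w in enumerate(weights):
        r -= w
        if r <= 0:
            return None if idx == len(options) else idx
    return None
```

With a single alternative plus the no-action option, this reduces to the logistic function mentioned in the text, since \(p_\omega = 1/(1 + e^{-u_\omega})\) when \(u_0 = 0\).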
Figure 1 (a) Composition of the groups in the experimental setups. Each icon represents five players, humans or autonomous agents (bots). Setup A consisted only of human players, while the hybrid setups B and C consisted of 15 human and 15 agent players arranged into groups. The numbers of human-human, human-bot, and bot-bot interactions vary with the group compositions of setups B and C. (b) Example snapshot of the initial configuration of a game using setup B. The largest clusters are of size 7 for red and 2 for green and blue. At the start of each game, the network configuration was randomized under the restriction that no cluster contained six or more nodes, i.e., no cluster exceeded size 5.
Results
Figure 2 Average score in each round of the different setups. The games using setup A reached the maximum number of points in all four games, whereas the hybrid setups B and C and setup D did not reach it within 15 rounds. Of the four setups, hybrid setup B performed worst. However, the differences between the hybrid setups are small, since the individual players belonging to one of the color groups of setup B achieved the maximum reward (i.e., 27 points in total) by the fifth game. Error bars represent standard errors.

Figure 3 Average number of interactions per game between the different color groups and between human and agent players in the experimental setups. The numbers of human-agent interactions differ because of the different agent concentrations in the color groups. The numbers represent all initiated interactions (i.e., sent requests). In the purely human setup A and the purely agent-based setup D, the numbers of initiated interactions are the same. However, the different group compositions of setups B and C affect the numbers of initiated interactions between humans and agents. Note that when a group performed poorly in a game and was unable to form a large cluster, the other groups initiated more interactions towards it in order to maximize their own payoffs, and could as a result coordinate their actions through cooperative interactions. Such assistance towards the autonomous agents resulted in smaller cluster sizes, since the coordinated interactions tended to break clusters in order to facilitate other players' moves. In total, the hybrid setups had on average more interactions per game (90 for B and 86 for C) than the pure human and pure agent setups (61 for both A and D).
Figure 4 Request (a) and acceptance (b) activity by cluster size of the human players in the different setups, and of the autonomous agents in setup D. All setups show a similar decay of activity, but it is noteworthy that the setups involving autonomous agents show a slower decay of activity than the purely human one. This behavior is likely a consequence of the point-based reward scheme, the performance of the autonomous agents, and the lack of the full coordination needed to form large clusters in the hybrid setups. Since the autonomous agents are based on the same model, their activity is represented by setup D. (c) Overall activity of each player in the experimental setups. Each marker represents the fraction of opportunities on which a player took an action (sending or accepting a request) in a given setup. The distribution of the markers shows that no human player requested swaps at every opportunity. Notably, there were a few instances of players accepting all requests in setups A and B.

Figure 5 Request (a) and acceptance (b) strategies by cluster size in the experimental setups. Each value is the difference between the normalized average cluster size of the neighbors sharing the chosen neighbor's color and the chosen neighbor's normalized cluster size. A positive value indicates more cooperative decision-making, in the sense that the neighbor with the smallest cluster of the given color is chosen; a negative value indicates the opposite. For completely random decision-making the value is 0. (c) Average strategy of the individuals in the games. Each marker represents a player in a given setup. The autonomous agents are represented by the average over all agents of setup D. A negative value indicates that the player's average strategy was not to request the most helpful neighbors, or to accept requests from the neighbors with the smallest cluster sizes.
- In this experiment a total of 13 games were played: two sessions with 4 games of setup A, two sessions with 5 games of setup B, and two sessions with 4 games of setup C. Across the 13 games the subjects sent 591 requests and received 549 requests. The experiment began with a group of 30 subjects playing the setup A game four times; the subjects were then divided into two groups of 15, which participated separately in the hybrid setups B and C, with bots added to the teams. This design keeps the network size at 30 in every setup (15 humans and 15 bots; see Fig. 1). The agents appeared in the game indistinguishable from the subjects, and the players were not told that they were playing with agents until the experimental session was over. This was done to reduce potential bias in the subjects' decision-making12,31 and to evaluate "organic" adaptation rather than an intervention. The subjects were efficient in reaching the final configuration and the maximum score in every setup A game.
- We found that the two hybrid setups performed worse in terms of team score. In particular, setup B had three games that entered the final round before all players had earned the maximum payoff. This difference between the purely human setup A and the hybrid setups B and C suggests that the adaptation of the human subjects to the autonomous agents' decisions was suboptimal, or that the autonomous agents' strategies were suboptimal for reaching the objective of the given payoff function when played with human subjects. This incompatibility could be the result of fitting the model to data from a purely collective game, which resulted in the autonomous agents' objectives differing from those of the current game. The games in setup B performed significantly worse due to the high number of agent-agent interactions resulting from the difference in the composition of agents and humans in the color groups (see Figure 3). Also, the autonomous agent model in our previous work10 included a stability rule that prevented agents from sending requests between two large clusters (i.e., clusters with a size larger than 60% of the maximum possible value). During the experiments, agents were generally more active in terms of sending requests, despite this stability rule being applied. This high activity caused instability, resulting in the breakup of profitable clusters.
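The stability rule described above can be sketched as a simple predicate. This is a minimal illustration, not the experiment's code: the 60% threshold is taken from the text, while the maximum cluster size of 10 (30 players split into 3 color groups) and all function names are assumptions.

```python
MAX_CLUSTER = 10  # assumed maximum cluster size: 30 players / 3 colors

def is_large(cluster_size, max_cluster=MAX_CLUSTER, threshold=0.6):
    """A cluster counts as 'large' when it exceeds 60% of the maximum size."""
    return cluster_size > threshold * max_cluster

def request_allowed(sender_cluster, receiver_cluster):
    """The stability rule blocks requests between two large clusters."""
    return not (is_large(sender_cluster) and is_large(receiver_cluster))

print(request_allowed(7, 8))  # both clusters exceed 6 -> False
print(request_allowed(7, 3))  # receiver's cluster is small -> True
```

A swap between two already-large clusters risks breaking up profitable groups, which is exactly the instability the rule was meant to prevent.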
- We measure the players' behavior using measures of activity, risk aversion, and the rationality of the actions taken, based on the agent-based model implemented in our previous study10. Activity, corresponding to requesting an exchange of places with a nearest neighbor or accepting one of such requests, is measured as the proportion of opportunities on which the focal player performs that action. Thus, we define the activity of player i as:
- \(a_i = \frac{N_i}{\mathcal{N}_i}\)
- Here, (N_i) is the actual number of times that player i chooses to interact with a neighbor, and (\mathcal{N}_i) is the total number of times that player i could have chosen to interact. The quantity (a_i) is measured separately for requesting and accepting actions. For example, a requesting activity of 1.0 indicates that the player sent a request every time they had the option. The concept of risk aversion is very similar to activity, but it is measured as a function of the player's cluster size. The lower this value, the more the player is willing to risk losing the cluster size, and ultimately the points, that the player already has.
- \(a_i(s) = \frac{N_i(s)}{\mathcal{N}_i(s)}\)
- Here, (N_i(s)) is the number of times that player i, while in a cluster of size s, interacts with a neighbor, and (\mathcal{N}_i(s)) is the total number of opportunities to interact while in a cluster of size s. Again, (a_i(s)) is defined separately for requests and acceptances. We define player rationality as follows: for focal player i sending a request to, or accepting a request from, neighbor j, we measure the quantity:
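Both measures above are ratios of actions taken to opportunities available. A minimal sketch of how they could be computed (the data layout and names are illustrative, not taken from the study's codebase):

```python
from collections import defaultdict

def activity(events):
    """Overall activity a_i = N_i / calN_i: the fraction of permissible
    opportunities on which the player acted. `events` is a list of
    (cluster_size, acted) pairs, one per opportunity."""
    if not events:
        return 0.0
    return sum(acted for _, acted in events) / len(events)

def activity_by_cluster_size(events):
    """Risk aversion a_i(s) = N_i(s) / calN_i(s): the same ratio,
    computed separately for each cluster size s the player was in."""
    acted, total = defaultdict(int), defaultdict(int)
    for s, did_act in events:
        total[s] += 1
        acted[s] += int(did_act)
    return {s: acted[s] / total[s] for s in total}

# A hypothetical player who always acted while in a small cluster (s = 2)
# but rarely while in a larger one (s = 5); in practice this is computed
# separately for requesting and accepting actions:
events = [(2, True), (2, True), (5, True), (5, False), (5, False)]
print(activity(events))                  # -> 0.6
print(activity_by_cluster_size(events))  # -> {2: 1.0, 5: 0.333...}
```

The per-cluster-size breakdown is what reveals risk aversion: a player with high activity at small s but low activity at large s is unwilling to risk an already-profitable position.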
- \(U_j = \langle s(c_j) \rangle - s_j\)
- where (c_j) is j's color, (\langle s(c_j) \rangle) is the average cluster size of i's neighbors with the same color as j, and (s_j) is j's cluster size. Note that all of the cluster size information above is available to i but not to j. The larger the difference between (\langle s(c_j) \rangle) and (s_j), the greater the expected payoff for neighbor j if player i facilitates the exchange. Thus, a more positive value indicates more cooperation on the part of player i. In a purely cooperative setting, this quantity reflects the rational choices and cognition of the players10.
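This rationality measure is a single subtraction once the neighborhood information is gathered. A minimal sketch, with illustrative function and argument names:

```python
def rationality(neighbor_cluster_size, same_color_cluster_sizes):
    """U_j = <s(c_j)> - s_j: the average cluster size of focal player i's
    neighbors sharing j's color, minus j's own cluster size (s_j).
    A positive value means the chosen neighbor j sits in a below-average
    cluster, i.e. a more cooperative choice by i."""
    avg = sum(same_color_cluster_sizes) / len(same_color_cluster_sizes)
    return avg - neighbor_cluster_size

# Neighbor j is in a cluster of size 2, while i's neighbors of the same
# color average (4 + 6) / 2 = 5, so helping j is a cooperative choice:
print(rationality(2, [4, 6]))  # -> 3.0
# Choosing a neighbor already above the average gives a negative value:
print(rationality(6, [4, 6]))  # -> -1.0
```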
- As in our previous experiment, we observed that the players' requesting and accepting activities decreased as they approached the objective of forming the maximum cluster. Under the different, point-based payoff function, the players behaved the same when sending requests, but made more relaxed decisions when accepting received requests after achieving the required number of points (see Fig. 4). This more relaxed acceptance behaviour after reaching the necessary cluster size indicates that the players understood the payoff function and adjusted their decision-making once a beneficial cluster size had been reached. The increase in accepting actions also indicates that the players were willing to sacrifice their own position to help other, smaller clusters in order to optimize their points, since more points could be gained if the other groups reached larger clusters. However, due to the constraints of the game and the sample size obtained, some situations rarely appeared during the games, as can be seen from the fluctuations caused by the lack of players with cluster size 8 in setup B (Fig. 4). Evaluating the individual activities of the players in each setup showed no significant differences between the setups, i.e., no difference in player behaviour in terms of overall activity (see Fig. 4). However, as can be seen in Fig. 4, the subjects' activities differed from those of the agents in setup D.
- Since the subjects' activities suggested that there were differences in decision-making between the purely human setup A and the hybrid setups B and C, we investigated the rationality of the human players' decisions. The average rationality of choices across possible cluster sizes is shown in Figure 5. The players' strategies were individually heterogeneous, with some players adopting an average strategy with negative values for both acceptance and request (see Figure 5). This means that, during a particular setup, those players sent requests to neighbors with a cluster size larger than the average of the corresponding neighborhood. Such decisions are not necessarily irrational, since players may have used information obtained not from their neighborhood but from the network structure or from actions previously taken by the players. Autonomous agents are excluded from this analysis, since their decision model is homogeneous in our implementation. However, the in-game environment may introduce variability into our measures of their strategies, since agents make decisions based on probability functions over the possible options.
- In this study we focused on experiments with humans and agents in order to gain insight into coordination, cooperation, and the complex dynamics of interactions between humans and autonomous agents that imitate human behaviour. While simulations using only autonomous agents can yield good or even optimal performance, the outcome may differ in decision-making setups that combine agents and humans, as has been demonstrated in recent studies using communication and video games32,33. These studies show that the performance of an autonomous agent model tends to decrease when combined with human decision-makers, unless the model itself has been specifically trained to work well with human subjects33. A fundamental reason for this degradation may be that human players do not make decisions according to fixed rules or in a uniform way. When comparing purely human groups with purely agent groups, differences of this kind may also stem from the agents' superior ability to operate in the given environment. In this study we therefore conducted experiments on cooperating groups in hybrid human-agent teams, in order to evaluate the performance of these groups under various configurations and to investigate the effects of a modified payoff function. The autonomous agents were modeled using the results of our previous study10.
- To better understand these aspects, the experimental design was divided into three different setups, two of which varied the configuration of the autonomous agents. A control group consisting only of autonomous agents was implemented using the same model as in the two hybrid setups. The experimental design of this study follows our previous experiments, using as network topology a lattice with periodic boundaries plus added small-world links, which makes the environment easier for the human players to perceive and navigate. However, since forming the maximum cluster was not necessary to achieve the maximum payoff, the number of rounds was reduced from 21 to 15, with each round giving the players of each of the three color groups an opportunity to send a request to exchange places with a neighbor. The number of rounds was chosen by training the agents with the same point-based payoff function and measuring the time needed to reach the maximum payoff; preliminary simulations reinforced the initial expectation that both humans and agents would be able to reach it within this limit. The number of games was limited by the length of the experimental session, since longer sessions could fatigue the human players and affect their decision-making. It should be noted that, as a limitation of the game design, it is impossible for all possible states (i.e., different neighborhood configurations) to appear during an experimental session.
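The topology described above can be sketched in plain Python. This is a simplified stand-in, not the experiment's actual network: it uses a one-dimensional ring lattice with periodic boundaries instead of the study's two-dimensional lattice, and the neighbor count, number of shortcut links, and even color split are illustrative assumptions.

```python
import random

def make_network(n=30, k=2, extra_links=5, seed=42):
    """Build a ring lattice with periodic boundaries (each node linked to
    its k nearest neighbors on each side) plus a few random 'small-world'
    shortcut links, and assign three color groups of equal size."""
    rng = random.Random(seed)
    edges = set()
    # periodic lattice: node i connects to i+1, ..., i+k (mod n)
    for i in range(n):
        for d in range(1, k + 1):
            edges.add(tuple(sorted((i, (i + d) % n))))
    # add random shortcuts until the target edge count is reached
    while len(edges) < n * k + extra_links:
        u, v = rng.sample(range(n), 2)
        edges.add(tuple(sorted((u, v))))
    # three equal color groups (10 players per color for n = 30)
    colors = {i: i % 3 for i in range(n)}
    return edges, colors

edges, colors = make_network()
print(len(edges), len(colors))  # -> 65 30
```

The shortcuts shorten path lengths between otherwise distant regions of the lattice, which is what makes distant same-colored clusters reachable within a limited number of rounds.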
- First, in the case of the human-human setup, the final result of each game was the formation of the largest possible groups, as in the collective payoff game, but the human subjects were shown to understand the modified individual payoff function. This was clearly visible in their actions, as the proportion of accepted requests for each cluster size follows the cluster size, rather than the declining gradient of the collective payoff game. The indicators of the human players' strategies were obtained by condensing the game state into a combination of three variables: the cluster size of the focal player (s_i), the cluster size of the selected neighbor (s_j), and the average cluster size of the neighboring players sharing the selected neighbor's color. The payoff function affected the players' strategies once their cluster size exceeded the required threshold. This suggests that the players' strategies divided into maximizing their own cluster (requesting) and helping other groups form clusters (accepting). A similar imbalance between these objectives was also evident in the hybrid human-agent setups, in which the agents had been modeled on players in a fully cooperative setting.
- In summary, our game split the payoff function into three parts, two of which are achievable through a player's own team-level performance (a small and a large threshold for the player's own group size), while the third depends on the performance of the other teams. To achieve the third part of the payoff, teams are required to cooperate with the other teams, which also entails risk, since their own group size may fall below the threshold. Our previous experiment (incentivizing collective goals)10 demonstrated that if all teams cooperate, they can simultaneously achieve the maximum payoff. Therefore, we used an all-human setup to test whether a payoff function that appears less cooperative would change the outcome of the game. We then focused on how the inclusion of agents changes the overall performance, and how the type of mixture (setup B vs. setup C) affects the outcome. First, we found that overall performance is best when teams are composed only of humans (setup A), and that performance can be worse when humans and agents are mixed within the same teams (setup B) than when humans and agents form separate teams (setup C). From the plot of the average scores, we can see that setups A and C allow a much faster increase.
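The three-part payoff structure summarized above can be sketched as a small function. The threshold values and point amounts here are hypothetical placeholders for illustration; only the structure (two own-group thresholds plus an other-team condition) comes from the text.

```python
def payoff(own_cluster, other_team_clusters,
           small_threshold=4, large_threshold=8, bonus_threshold=8):
    """Three-part payoff: two parts depend on the player's own group size
    (a small and a large threshold), while the third part pays out only
    if every other team also reaches a large cluster."""
    points = 0
    if own_cluster >= small_threshold:
        points += 1  # part 1: own group reaches the small threshold
    if own_cluster >= large_threshold:
        points += 1  # part 2: own group reaches the large threshold
    if all(c >= bonus_threshold for c in other_team_clusters):
        points += 1  # part 3: cooperative bonus, requires other teams to succeed
    return points

print(payoff(9, [8, 8]))  # all three conditions met -> 3
print(payoff(5, [3, 8]))  # only the small threshold met -> 1
```

The sketch makes the risk explicit: earning the third point requires helping other teams grow, which can shrink one's own cluster below a threshold and forfeit the first two points.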
- Human teams appear to use flexible, distinct strategies at different stages of the games. Therefore, giving humans more control (as in setup C) may help maximize the overall performance of hybrid systems, whether the setting is purely cooperative or only partly cooperative. Alternatively, a hybrid setup with fewer agent-agent interactions can be expected to perform better. However, if the payoffs were further skewed so that teams had less incentive to help each other, coordination by the agents might help improve the performance of the system as a whole.
- Kearns, M., Suri, S. & Montfort, N. An experimental study of the coloring problem on human subject networks. Science 313, 824-827 (2006).
- Jennings, N. R. et al. Human-agent collectives. Communications of the ACM 57, 80-88 (2014).
- Isern, D., Sánchez, D. & Moreno, A. Agents applied in health care: A review. International Journal of Medical Informatics 79, 145-166 (2010).
- Corchado, J. M., Bajo, J., De Paz, Y. & Tapia, D. I. Intelligent environment for monitoring Alzheimer patients, agent technology for health care. Decision Support Systems 44, 382-396 (2008).
- Van Doorn, J. et al. Domo arigato Mr. Roboto: Emergence of automated social presence in organizational frontlines and customers' service experiences. J. Service Res. 20, 43-58 (2017). https://doi.org/10.1177/1094670516679272
Acknowledgements
Robu, V. et al. An online mechanism for multi-unit demand and its application to plug-in hybrid electric vehicles.
Author information
Authors and Affiliations
- Bonabeau, E. Agent-based modeling: Methods and techniques for simulating human systems. Proceedings of the National Academy of Sciences 99, 7280-7287 (2002).
- Kahneman, D. Maps of bounded rationality: Psychology for behavioral economics. American Economic Review 93, 1449-1475 (2003).
- Groom, V. & Nass, C. Can robots be teammates? Benchmarks in human-robot teams. Interaction Studies 8, 483-500 (2007).
- Bhattacharya, K., Takko, T., Monsivais, D. & Kaski, K. Group formation on a small-world: experiment and modelling. Journal of the Royal Society Interface 16, 20180814 (2019).
Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence 1, 517-521 (2019).
Shirado, H. & Christakis, N. A. Locally noisy autonomous agents improve global human coordination in network experiments. Nature 545, 370-374 (2017).
Huang, K. et al. Heterogeneous cooperative belief for social dilemma in multi-agent systems. Applied Mathematics and Computation 320, 572-579 (2018).
Huang, K., Wang, Z. & Jusup, M. Incorporating latent constraints to enhance inference of network structure. IEEE Transactions on Network Science and Engineering 7, 466-475 (2018).
Perc, M., Szolnoki, A. & Szabó, G. Cyclical interactions with alliance-specific heterogeneous invasion rates. Physical Review E 75, 052102 (2007).
Kearns, M., Judd, S. & Vorobeychik, Y. Behavioral experiments on a network formation game. In Proceedings of the 13th ACM Conference on Electronic Commerce, 690-704 (ACM, 2012).
Judd, S., Kearns, M. & Vorobeychik, Y. Behavioral dynamics and influence in networked coloring and consensus. Proceedings of the National Academy of Sciences 107, 14978-14982 (2010).
Gale, D. & Shapley, L. S. College admissions and the stability of marriage. The American Mathematical Monthly 69, 9-15 (1962).
Laureti, P. & Zhang, Y.-C. Matching games with partial information. Physica A: Statistical Mechanics and its Applications 324, 49-65 (2003).
Baronchelli, A., Gong, T., Puglisi, A. & Loreto, V. Modeling the emergence of universality in color naming patterns. Proceedings of the National Academy of Sciences 107, 2403-2407 (2010).
Modeling crowdsourcing as collective problem solving. Scientific Reports 5, 16557 (2015).
Centola, D. & Baronchelli, A. The spontaneous emergence of conventions: An experimental study of cultural evolution. Proceedings of the National Academy of Sciences 112, 1989-1994 (2015).
Chauhan, A., Lenzner, P. & Molitor, L. Schelling segregation with strategic agents. In Deng, X. (ed.) Algorithmic Game Theory, 137-149 (Springer International Publishing, Cham, 2018). https://doi.org/10.1007/978-3-319-99660-8_13
Elkind, E., Gan, J., Igarashi, A., Suksompong, W. & Voudouris, A. A. Schelling games on graphs. arXiv preprint arXiv:1902.07937 (2019).
Schelling, T. C. Dynamic models of segregation. Journal of Mathematical Sociology 1, 143-186 (1971).
Drèze, J. H. & Greenberg, J. Hedonic coalitions: Optimality and stability. Econometrica 48, 987 (1980). https://doi.org/10.2307/1912943
Aziz, H. & Savani, R. Hedonic games. In Brandt, F. & Procaccia, A. D. (eds.) Handbook of Computational Social Choice, chap. 15, 356-377 (Cambridge University Press, New York, 2016).
Barrett, S. Why Cooperate?: The Incentive to Supply Global Public Goods (Oxford University Press on Demand, 2007).
Takko, T. Study of modeling human behavior in cooperative games. Master's thesis, Aalto University (2019). https://aaltodoc.aalto.fi/handle/123456789/39939
Chen, D. L., Schonger, M. & Wickens, C. oTree—An open-source platform for laboratory, online, and field experiments. Journal of Behavioral and Experimental Finance 9, 88-97 (2016).
Crandall, J. W. et al. Cooperating with machines. Nature Communications 9, 233 (2018).
Hu, H., Lerer, A., Peysakhovich, A. & Foerster, J. "Other-play" for zero-shot coordination. arXiv preprint arXiv:2003.02979 (2020).
Carroll, M. et al. On the utility of learning about humans for human-AI coordination. Advances in Neural Information Processing Systems, 5174-5185 (2019).
Berner, C. et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680 (2019).
All authors gratefully acknowledge support from the EU Horizon 2020 FET Open RIA project (IBSEN), No. 662725, and from SoBigData: Social Mining and Big Data Ecosystem — the European Integrated Infrastructure for Social Mining and Big Data Analytics, Grant agreement ID: 871042 (http://www.sobigdata.eu).
Department of Computer Science, Aalto University School of Science, 00076 Espoo, Finland: Tuomas Takko, Kunal Bhattacharya, Daniel Monsivais & Kimmo Kaski
Aalto University School of Business, 00076 Espoo, Finland: Kunal Bhattacharya
The Alan Turing Institute, 96 Euston Road, Kings Cross, London NW1 2DB, UK: Kimmo Kaski
Tuomas Takko
A mobile phone (also called a cell phone or handphone) is an electronic device used for full-duplex two-way wireless communication over a cellular network of base stations known as cell sites.
This article explains the advantages and disadvantages of using a mobile phone. Before starting this topic, consider a few questions.
Do you use your smartphone for long periods of time?
Do you feel frustrated without your mobile phone, as if you could not live a second without it?
Do you habitually use the Internet on your mobile phone for 6 to 10 hours every day?
If your answer is "yes", this article will be very useful for you and will provide complete information on the advantages and limitations of using a mobile phone.
I hope this article helps you get information about mobile phones. It should also be useful for students writing about or presenting on the advantages and disadvantages of mobile phone use. Let's get into the main subject right away.
In this era, few people do not own or use a mobile phone. Mobile phones have become a priority in everyone's life, and people use them daily for communication, business, and other activities. In today's world, the mobile phone has completely changed personal life.
Everything has positive and negative effects, and the same is true for mobile phones: they are a great invention, but they also have bad aspects. Viewed rightly, the mobile phone is an astounding invention for humanity, but continued long-term use can cause problems.
Let's explain the advantages of mobile phones in detail.
Easy Communication
The mobile phone has changed the way we communicate. Before mobile phones appeared, people used landlines or letters to contact those living far away.
Mobile phones are not limited to communication; they offer various other advantages and also have limitations. Most people know the benefits of mobile phones. The main advantages are as follows.
The primary advantage of using a mobile phone is that it makes communication easier and cheaper. Because of their low price, mobile phones are affordable; they have revolutionized the telecommunications industry, and about 95% of people use them for communication.
Pressing a few keys on your mobile phone lets you reach friends, family, and colleagues at any time, which makes communication convenient. A mobile phone lets you contact friends in various ways, such as voice calls, video calls, text messages, and recorded calls.
Education
Education is another major advantage of mobile phones. Mobile phones can be used to gain knowledge and information about various topics. Recently, most universities, educational institutions, and schools have offered online education, providing suitable learning materials in forms such as images, photos, text, and PDFs. During the COVID-19 pandemic, students took the online classes provided by their educational institutions to ensure their safety and health.
Social media
In this era, mobile phones are not used only for calls. The smartphone is said to be a gift for social media enthusiasts. Social media apps such as Twitter, Instagram, Snapchat, and Facebook are always at our fingertips. You can edit and share photos and posts on social media directly from your mobile phone, so social media is always accessible.
Most people scroll through their social media timelines on a mobile phone in their spare time.
Promoting business
Mobile phones can also be used for business advertising. A mobile phone is ideal for entrepreneurs and businesspeople promoting a business through online or offline channels. Online, they can use social media websites and messaging applications such as Telegram, Instagram, WhatsApp, and Pinterest. Many large companies arrange meetings through video messaging applications like Skype.
One of the older offline methods of promoting a business via mobile phone is text messaging. Even in this era, most companies promote their business with text messages, attaching links to product pages and business websites at the end of every message.
