Open Access

Application and Ethical Review of Artificial Intelligence Technology in News Writing and Editing and Distribution

19 Mar 2025

Introduction

Traditional television media has occupied an important position in information dissemination for decades. According to statistical data, TV media accounts for about half of all information dissemination, making it one of the main channels through which people receive information [1-2]. With the development of artificial intelligence technology, however, the market share of traditional TV media is shrinking, and TV news gathering faces great challenges. In this context, in order to secure a place in a fiercely competitive market, TV news editing needs to innovate constantly, incorporate AI technology, and use it to broaden news editorial channels and improve its overall strength [3-5].

As an important part of the development of digital information technology, the practice of news gathering and editing has been shaped by technological iteration and the diffusion of the media ecology [6-7]. In news interviewing, multimedia, big data, and artificial intelligence technologies are used for data mining and public-opinion analysis, interviewing methods are innovated, and depth and originality are emphasized [8-11]. In news writing, journalists adapt to the writing style of the AI era, innovate writing techniques, focus on storytelling and emotional resonance, and construct an all-media reporting system to achieve all-around coverage [12-15]. These innovative approaches help increase the authenticity and credibility of news reports, improve audience interest and participation, and better meet the needs of audiences and society [16-20]. Therefore, journalists should continue to learn and master new technologies and methods to adapt to the development trend of the AI era and provide better, more comprehensive news reports [21-23].

This study first describes the reconstruction of journalism by AI technology, the ethical reflections it prompts, and the multiple ethical risks of the AI age. It then analyzes the application of AI in journalism from three aspects. Regarding users' attitudes and perceptions of algorithmic news, this study surveyed 836 users of applications such as Today's Headlines, Tencent News, and One Point Information in 20 districts of Guangdong Province through questionnaires, and analyzed the factors affecting users' perceptions and attitudes. Regarding algorithmic reconstruction, an algorithmic audit of a short-video platform was conducted to measure and compare the news value of news videos in different contexts. Regarding communication bias, the influence of 12 variables on the communication effect of values at the level of environmental cognition was studied through correlation analysis. Finally, in light of this reconstruction and ethical reflection, corresponding coping strategies are proposed.

Ethical scrutiny of artificial intelligence in newsgathering and editorializing

Today, the expanding use of artificial intelligence technologies to improve the efficiency and accuracy of news creation also poses many implications and challenges for journalistic ethics. Ethical norms in journalism typically emerge through slow, complex processes that conflict with the logic of continuous, high-speed, disruptive technological innovation, creating a persistent lag between regulatory regimes and technological development. Journalism is responsible for reshaping the media ecology, creating user landscapes, and transmitting information culture, and AI journalism is no exception. It is therefore necessary to view AI technology through the lens of journalistic ethics.

Reconfiguring and Rethinking the Ethics of Journalism with Artificial Intelligence Technologies
Analysis of the Reconfiguration of Journalism by Artificial Intelligence Technology

Revolutionizing the traditional production process

The production process of artificial intelligence news is roughly divided into four major stages: database construction, machine learning, news writing, and manual review. The AI learns from past news articles, processes the data they contain, and gradually structures it into a form the system can use.

Redefining the identity of “gatekeepers”

As the key staff responsible for news dissemination, the "gatekeeper" must evaluate and adjust a report's values, writing quality, compliance with media policy, fit with audience requirements, and other criteria, so that only information meeting the standards is disseminated. In AI journalism, artificial intelligence systems take over this filtering and management of information.

Reinventing the media-audience relationship

By conceding their privacy rights, audiences are quickly fed information of interest, and the information individuals receive about the same events gradually diverges. While the relationship between media and audience has become closer and the media more aware of audience preferences, the biases toward certain positions and information, and the relatively homogenized content naturally embedded in artificial intelligence, are carried to the receiving end along with the pushed content.

Ethical reflections generated behind the application of artificial intelligence technologies

The subjectivity of news “human” decreases

The subjectivity of human beings in artificial intelligence news has declined, and the templatization of news content has become increasingly serious. Much of the content is repetitive, built on similar frames, mechanical and lacking in "human" subjectivity, while users of artificial intelligence often lack media literacy. Journalists should improve their ability to use technology and update their concepts of journalism ethics, while also taking the principles of social value into account in the development of AI technology.

News authenticity is challenged

The AI black box obscures the authenticity of news, and AI that spreads particular ideologies may also produce falsified reports embedded with bias and discrimination. It is necessary to focus on top-level design to form a virtuous cycle, establish a scientific and reasonable "error-producing and error-correcting" system for robot writing, and improve the basic technology of AI news reporting.

News value is weakened

The processing of news data by intelligent AI recommendation technology can cause decision-making errors, lowering the quality of news content and weakening the value of reporting in serving society. Attention should be paid to the feedback and participation of AI news users: user feedback and suggestions should be incorporated into AI news production, with full attention to users' "bottom-up" governance of AI ethics.

Multiple Ethical Risks in the Age of Artificial Intelligence

Content Risk: False Dissemination under Misrepresentation

Doubts about the authenticity of content generated with AIGC technology have long troubled both the media and consumers. When this technology is used for news reporting, it may become a major source and amplifier of false news, which not only harms society but also reduces the professionalism and credibility of the news media. In early 2023, NewsGuard, a well-known U.S. news credibility rating organization, tested ChatGPT and found that more than 80% of its responses were misleading or incorrect, containing a large number of inaccurate statements, rumors, and harmful information. These cases show that if technologies such as ChatGPT are applied to news reporting, it is difficult to ensure the truthfulness of the news; such tools may even be exploited by malicious operators to spread false information, causing great harm to the news media, public opinion, and social development.

Technological Risks: Multiple Harms under Privacy Leakage

Over-reliance on technology may create the risk of privacy leakage. At the current stage, generative AI does not yet have the ability to reason in the news field, and over-reliance on news information generated by AIGC may cause people to lose the ability to think and make value judgments for themselves, reducing them to appendages of technology. For news consumers, when AI generation based on AIGC is widely used in news content production, it may bias their thinking and reading patterns toward personalization, so that they focus only on their favorite news categories and miss the opportunity to learn about other kinds of news. The privacy leakage brought by technological risk also continues to erode the news industry, an invisible disaster for both news producers and news consumers.

Cognitive Risk: Lack of Awareness under the Ethics of Rebellion

The rebellion of artificial intelligence technology brings ethical risks. Science and technology are a double-edged sword: with the continuous development of artificial intelligence, a series of ethical issues has emerged. At present, there is still a responsibility gap in AI applications in China; relevant laws and regulations have not yet been issued, and corresponding rules have not yet been established. Accountability is an important issue, and the key questions are whose responsibility is pursued and by whom. Liu Yongmou, an associate professor at the School of Philosophy of Renmin University of China, argues that in an era of high technology, technological rebellion may occur and must be treated with caution. With the combination of artificial intelligence and neuroscience, robots are becoming ever more "intelligent", and artificial intelligence has surpassed human beings in many respects. Yet the possibility of technological rebellion is always present, posing potential threats to human beings themselves; these are ethical issues in reality.

Results of the application of artificial intelligence technology in journalism
A study of users’ perceptions and attitudes toward algorithmic news

This paper explores users' attitudes toward and perceptions of algorithmic news through a questionnaire survey of 836 users of applications such as Today's Headlines, Tencent News, and One Point Information in 20 districts of Guangdong Province.

Descriptive findings

From September 15 to 25, 2024, questionnaires were distributed in 20 districts of Guangdong Province, and 836 valid questionnaires were received (56% female and 48% male). The basic profile of the respondents is shown in Table 1: 3.17% were under 18 years old, 67.28% were between 18 and 30, 22.58% were between 31 and 50, and 6.97% were over 50; 69.75% resided in urban areas and 30.25% in rural areas. In terms of education, 59.41% had less than a bachelor's degree and 40.59% had a bachelor's degree or higher.

Basic information of respondents

Age structure
Under 18 3.17%
18-30 67.28%
31-50 22.58%
Over 50 6.97%
Nature of household registration
Urban area 69.75%
Rural area 30.25%
Educational background
Below bachelor's degree 59.41%
Bachelor's degree or above 40.59%
Total time of use
More than 3 years 33.57%
1 to 2 years 44.02%
Less than half a year 22.41%

The study investigated audience news-receiving habits from three aspects: total time of use, frequency of use, and time of use. In terms of total usage time, 33.57% of participants had used algorithm-recommended information platforms to receive news for more than three years, 44.02% for one to two years, and only a few for less than half a year. In terms of usage frequency, 78.62% of participants used them every day, with 42.36% using them more than three times a day. In terms of usage time, 67.40% of participants spent less than half an hour per session and 8.31% spent more than an hour; 54.38% of users did not use these apps at a specific time, and 36.72% concentrated their use after 5 p.m. Participants' use of information apps based on algorithmic recommendation is thus characterized by high frequency, short duration, and randomness: algorithmic news not only leads to fragmented reading habits but also fragments users' time.

Influence of educational background on users’ cognitive attitudes

The study categorized the education level of the respondents into elementary, junior high, high school, college, graduate, and doctoral and above education users. The analysis of variance (ANOVA) of perception and attitude towards algorithmic news by education level is shown in Table 2.

Variance analysis of different educations on algorithmic news cognition

Group (sample size); cells are M (σ)   Knowledge of algorithms   Algorithm privacy   Headline party   Information cocoon   News value   Plagiarism and infringement   Reinforce inequality
Primary school or below (3)   2.13 (0.97)   2.20 (1.74)   2.06 (1.55)   2.21 (0.88)   2.86 (1.84)   2.84 (1.51)   2.54 (1.00)
Junior high school (32)   2.97 (1.05)   3.74 (0.76)   2.98 (1.20)   3.17 (0.67)   3.31 (0.64)   3.53 (1.02)   3.15 (0.81)
Senior high school (69)   3.35 (0.94)   3.62 (0.71)   3.41 (0.87)   3.24 (0.54)   3.51 (0.77)   3.59 (0.80)   3.22 (0.73)
Junior college (213)   3.54 (0.87)   3.70 (0.65)   3.55 (0.76)   3.27 (0.46)   3.47 (0.85)   3.66 (0.73)   3.38 (0.77)
College (362)   3.88 (0.74)   3.88 (0.69)   3.61 (0.70)   3.25 (0.47)   3.58 (0.72)   3.70 (0.78)   3.52 (0.75)
Master (137)   4.04 (0.67)   3.91 (0.63)   3.75 (0.73)   3.31 (0.53)   3.67 (0.84)   3.68 (0.62)   3.47 (0.81)
Doctoral (20)   4.15 (0.61)   3.96 (0.58)   3.96 (0.65)   3.26 (0.38)   3.76 (0.89)   4.04 (0.70)   3.38 (0.74)
ANOVA F   10.34   4.02   4.94   5.31   5.02   6.21   1.53
Sig.   0.001   0.001   0.000   0.000   0.001   0.000   0.751

According to the ANOVA, users with different education levels hold different views on the same issues, especially users with only an elementary school background versus all other groups. For six of the seven viewpoints the Sig. value is less than 0.05, indicating significant differences among users with different educational backgrounds; only the last viewpoint, "reinforcing inequality", shows no significant difference between groups. Educational background therefore has a significant impact on users' attitudes toward algorithmic news.
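The one-way ANOVA used in Table 2 can be sketched as follows. This is a minimal illustration assuming SciPy; the 1-5 Likert ratings below are hypothetical stand-ins, since the survey data are not public.

```python
# Hypothetical Likert-scale (1-5) ratings of one viewpoint, grouped by
# education level; the real survey data are not reproduced here.
from scipy import stats

ratings_by_education = {
    "junior_high": [3, 4, 3, 2, 4, 3],
    "senior_high": [3, 4, 4, 3, 3, 4],
    "bachelor":    [4, 4, 5, 4, 3, 4],
    "master":      [4, 5, 4, 4, 5, 4],
}

# One-way ANOVA across the education groups.
f_stat, p_value = stats.f_oneway(*ratings_by_education.values())
# A p-value below 0.05 would indicate a significant difference in mean
# ratings across education groups, as reported for six of the seven
# viewpoints in Table 2.
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

The same call, applied once per viewpoint column, yields the F and Sig. rows of Table 2.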

Algorithmic reconfiguration effects
Experimental design

In this study, a three-group ABT experimental design was used, with 90 rented cloud phones representing 90 virtual users. Experimental groups 1 to 3 ("Group 1", "Group 2", and "Group 3") were each assigned 30 phones, while the data of the control group were generated by random sampling without cloud phones. The IP addresses of all cloud phones were set to Beijing, and the hardware configuration and parameter settings were kept identical. The experiment was conducted from September 9 to September 16, 2023, with each cloud phone running for 5 minutes every hour between 9:00 and 21:00 each day, for a total of 7 days.

Group 1 simulates a "cold start", in which the algorithm pushes news videos to users who have not revealed any news preference to the platform. Groups 2 and 3 simulate situations in which users reveal their news preferences to the platform to different degrees, so that the algorithm curates news according to personalized needs. The control group randomly samples the news videos accessible to users on the platform and measures their news value orientation, reflecting the news value preferences of the industry itself.

Descriptive findings

The kernel density distribution of the time difference is shown in Figure 1. When the user's news preference is not clear (Group 1), the average time lag between the release of a news video and its being pushed is longest (M = -76.52, CV = -3.48, Median = -13.97). When users showed a news preference (Group 2), the mean time lag shortened (M = -43.73, CV = -3.82, Median = -9.21).

Figure 1.

Time difference kernel density distribution map
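The time-difference density in Figure 1 can be estimated with a standard Gaussian kernel density estimator. A minimal sketch assuming SciPy follows; the per-push time lags are invented for illustration, not taken from the experiment.

```python
# Kernel density estimate of push time lags (hours), as in Figure 1.
# The lag values below are hypothetical illustrations only.
import numpy as np
from scipy.stats import gaussian_kde

time_lags = np.array([-120.0, -80.0, -45.0, -30.0, -14.0, -9.0, -5.0, -2.0])

# Fit a Gaussian KDE and evaluate it on a grid over the observed range.
kde = gaussian_kde(time_lags)
grid = np.linspace(time_lags.min(), time_lags.max(), 200)
density = kde(grid)
# `density` is non-negative everywhere and integrates to 1 over the
# full support, giving the smooth curve shown in the figure.
```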

In terms of novelty, the kernel density distribution of cosine distance is shown in Figure 2. When the user does not specify an interest, the average cosine distance between pushed videos is largest and highly concentrated around 1 (M = 0.987, CV = 0.006, Median = 0.995); the pushed news videos are least similar to one another and can be considered the most novel. Once the virtual users show different degrees of news preference (Groups 2 and 3), the distribution of cosine distances between pushed videos changes shape: the mean shrinks, fluctuation grows, and the concentration near 1 weakens (Group 2: M = 0.980, CV = 0.028, Median = 0.993; Group 3: M = 0.978, CV = 0.035, Median = 0.993). However, the shortest mean cosine distance appeared in the control group (M = 0.971, CV = 0.052, Median = 0.991); as Figure 2 shows, videos randomly selected from news accounts had the highest proportion of similar topics.

Figure 2.

Cosine distance kernel density distribution
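The novelty measure above rests on the pairwise cosine distance between pushed videos. A minimal sketch with NumPy follows; the term-count vectors standing in for video content are hypothetical.

```python
# Cosine distance between two content vectors, as used for the novelty
# measure in Figure 2. The term-count vectors here are hypothetical.
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity: 0 means identical direction, near 1 means dissimilar."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

video_a = np.array([3.0, 0.0, 1.0, 2.0])   # e.g. term counts for video A
video_b = np.array([0.0, 2.0, 0.0, 1.0])   # e.g. term counts for video B

d = cosine_distance(video_a, video_b)      # d ≈ 0.761 for these vectors
# A mean pairwise distance close to 1 (as in the no-preference group,
# M = 0.987) indicates highly dissimilar, i.e. novel, recommendations.
```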

Analysis of communication bias
Three-level thematic division of environmental cognition

Through grounded-theory analysis of the interview data, three secondary themes were identified under the primary theme of "environmental cognition", with 12 tertiary themes beneath them, as summarized in Table 3.

Classification of subject categories at the environmental cognition level

Level 1 theme   Level 2 theme   Level 3 theme
Environmental cognition   Group impression   X1 Circle information; X2 Portrait deviation; X3 Group belonging; X4 Group difference
   Stereotype   X5 Stereotype threat; X6 Viewpoint polarization; X7 Audiovisual stimulus; X8 Impression traction; X9 Emotional infection
   Social relations   X10 Anxiety; X11 Situational attribution; X12 Self-worth identification

Group impression

Group impression is the impression of specific groups, such as self-attributed groups, typical groups in society, and groups of various classes, formed through exposure to algorithmic news. The tertiary themes include: the social stratification felt through exposure to algorithmic news content; cognitive bias caused by comparing algorithmic news with the real state of society; a sense of spiritual belonging to a certain group, generated by combining real experience with the group's presentation in algorithmic news; and psychological estrangement due to the discrepancy between the media picture of a group in algorithmic news and the individual's existing cognition.

Stereotype

Stereotypes are relatively fixed, one-sided evaluations and views of specific groups, social phenomena, and so on, formed in the course of exposure to algorithmic news. The tertiary themes include: negative cognition of a certain type of thing caused by exposure to heavily slanted information through algorithmic news, and the group pressure this brings; strongly biased views, attitudes, and emotions toward certain things; a tendency to become mentally immersed in fictional social situations because of the strong audiovisual stimulation algorithmic news creates; and the adjustment of one's attitudes toward objective things and emotions toward others by capturing the emotions of others.

Social relations

Social relations refer to individuals' perceptions of their relationships with others and with society. The tertiary themes include: apprehension and worry about one's own life situation and abilities arising from comparison with other people and groups; attribution of situations to oneself or to the external environment in virtual socialization; and identification with (or denial of) one's self-worth when one's interests and needs are fulfilled (or not), including adjustments to one's self-worth expectations.

Variables Related to the Measurement of Values at the Cognitive Level of the Environment

The 12 tertiary-theme measures at the environmental cognition level were correlated with the values measures, using the Pearson correlation coefficient to gauge the degree of correlation between each theme and the values at the environmental cognition level; the results are shown in Table 4.

Correlation analysis between environmental cognition and value measurement

            X1      X2    X3     X4     X5       X6    X7     X8    X9     X10    X11      X12
Pearson r   .188**  .036  .065*  -.047  -.092**  .061  -.007  .006  .076*  -.035  -.078**  .165**
Sig. (2-tailed)  .001  .685  .021  .088  .002  .069  .684  .852  .041  .473  .008  .000
N           1260

* Correlation is significant at the 0.05 level (two-tailed).

** Correlation is significant at the 0.01 level (two-tailed).

As can be seen from Table 4, circle information is significantly positively correlated with environmental-cognition values at the 0.01 level (P = 0.001 < 0.01), group belonging at the 0.05 level (P = 0.021 < 0.05), emotional infection at the 0.05 level (P = 0.041 < 0.05), and self-worth identification at the 0.01 level (P = 0.000 < 0.01). This indicates that as the measures of circle information, group belonging, emotional infection, and self-worth identification increase, the individual's values at the environmental cognition level converge more closely toward the socialist core values.

Table 4 also shows that stereotype threat is significantly negatively correlated with environmental-cognition values at the 0.01 level (P = 0.002 < 0.01), as is situational attribution (P = 0.008 < 0.01). As the measures of these two indexes increase, the individual's values at the environmental cognition level are more likely to develop unfavorably.

The deeper an individual's perception of stereotype threat, the more unfavorable it is to identification with the socialist core values, consistent with the research hypothesis. Related studies have shown that stereotype threat leads individuals to behave differently than usual and produces psychological alienation and a lack of identity. Major algorithmic news platforms, which use traffic, hot words, and interests as their main algorithmic basis, are especially prone to amplify certain topics and exert psychological pressure on individuals.
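The Pearson tests summarized in Table 4 can be sketched as follows. This minimal example assumes SciPy; the paired scores are hypothetical stand-ins for the X1 (circle information) and values measures, since the interview data are not public.

```python
# Pearson correlation between one tertiary-theme measure and the
# values measure, as in Table 4. The paired scores are invented.
from scipy.stats import pearsonr

circle_information = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]      # hypothetical X1 scores
values_score       = [38, 44, 30, 36, 25, 45, 40, 31, 43, 37]

# pearsonr returns the coefficient r and its two-tailed p-value.
r, p = pearsonr(circle_information, values_score)
# A positive r with p < 0.01 would correspond to the reported X1
# result (r = .188, significant at the 0.01 level).
```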

Regression modeling of values measures at the level of environmental perceptions

The analysis above establishes that, under the application of artificial intelligence technology in news, circle information, group belonging, emotional infection, and self-worth identification are positively correlated with values at the environmental cognition level, while stereotype threat and situational attribution are negatively correlated with them. A multiple linear regression was run with these six tertiary themes as independent variables and the environmental-cognition values as the dependent variable; the results are shown in Table 5. As the Sig. values show, circle information and self-worth identification are significant (P < 0.01); stereotype threat, emotional infection, and situational attribution are significant (P < 0.05); and group belonging is not significant (P > 0.05).

Coefficient of 1st regression of values at the environmental cognitive level

Model      B       Std. error   Standardized coefficient   t        Sig.
Constant   27.583  1.276        —                          20.583   .000
X1         .556    .261         .264                       4.578    .001
X3         .216    .090         .036                       1.536    .177
X5         -.367   .088         -.081                      -3.264   .019
X9         .106    .086         .046                       2.076    .045
X11        -.238   .084         -.053                      -1.983   .037
X12        .569    .091         .211                       5.681    .000

To make the regression equation more accurate, the tertiary theme "group belonging" was removed, and the five tertiary themes of circle information, emotional infection, self-worth identification, stereotype threat, and situational attribution were used as independent variables, with the environmental-cognition values as the dependent variable, in a second multiple linear regression; the results are shown in Table 6.

Coefficient of 2nd regression of values at the environmental cognitive level

Model      B       Std. error   Standardized coefficient   t        Sig.
Constant   29.619  1.138        —                          21.570   .000
X1         .531    .135         .152                       4.135    .000
X5         -.291   .095         -.063                      -2.742   .011
X9         .172    .093         .055                       2.311    .040
X11        -.185   .095         -.057                      -2.307   .029
X12        .535    .098         .163                       5.455    .000

From the Sig. values in the table, circle information and self-worth identification are significant (P < 0.01), and stereotype threat, emotional infection, and situational attribution are significant (P < 0.05). Let the environmental-cognition values be Y2, circle information X1, stereotype threat X5, emotional infection X9, situational attribution X11, and self-worth identification X12. Using the unstandardized coefficients, the linear regression equation is: Y2 = 29.619 + 0.531X1 - 0.291X5 + 0.172X9 - 0.185X11 + 0.535X12

The signs of the coefficients in this equation are consistent with the correlation analysis. Given the design of the scale, X1, X5, X9, X11, and X12 take values in [2, 10] and Y2 in [10, 50]. The equation describes the relationship between environmental-cognition values and these five independent variables, but because specific algorithmic systems assign values by different rules, it should be used for reference only.
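The fit-then-refit procedure behind Tables 5 and 6 can be sketched with ordinary least squares in NumPy. The design matrix and responses below are synthetic stand-ins generated from the Table 6 coefficients, purely to illustrate the computation.

```python
# Ordinary least squares sketch of the second regression (Table 6).
# Synthetic data: predictors in [2, 10] as in the scale, responses
# generated from the reported coefficients plus noise.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(2, 10, size=(n, 5))            # X1, X5, X9, X11, X12
true_beta = np.array([0.531, -0.291, 0.172, -0.185, 0.535])
y = 29.619 + X @ true_beta + rng.normal(0, 1.0, n)

# Prepend an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, betas = coef[0], coef[1:]
# With enough data the fit recovers coefficients close to those in
# Table 6 (e.g. the coefficient on X1 near 0.531, intercept near 29.619).
```

In the paper's workflow, the non-significant predictor (group belonging) was dropped between the first and second fit; here only the final five-predictor model is shown.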

Strategies to overcome the impact of artificial intelligence technologies on journalism ethics

Based on the above research into the application of AI technology in news, and on the analysis of user attitudes, algorithmic reconstruction effects, and communication bias, this paper proposes the following strategies for overcoming the impact of AI technology on journalism ethics.

Enhance social responsibility and infiltrate mainstream values in AI recommendation.

In developing AI recommendation technology, correct mainstream values should be integrated on the basis of ensuring feasibility. First, efforts can be made on keywords: keyword handling in the news production process can favor the distribution of more positive and healthy content, and optimizing the keyword screening mechanism can improve the quality and value of news.

Secondly, make AI recommendation more diversified. Recommendation should not be guided solely by the user's interests and hobbies; it should push more diversified content to create a comprehensive information space for the user.

Enhance the sense of gatekeeping in the design of AI and improve the transparency of AI.

The underlying logic of AI must be constructed with correct values in mind, abiding by society's moral and value guidelines. The technical threshold of AI design is high; its professionalism and complexity make it impossible for the public to supervise its operating mechanisms transparently. A complete supervision system should therefore be established. On issues involving the public's key interests or privacy, the purpose and process of the AI should be disclosed for public supervision, without revealing commercial secrets.

Improve relevant laws and regulations, and severely punish offending creative behaviors.

From the point of view of internal regulations and constraints, the developers of AI recommendation technology should formulate corresponding norms and guidelines to ensure that AI can follow the principles of fairness, impartiality, transparency, and accountability when recommending content. These norms and guidelines should clarify the specific processes, standards and methods of AI recommendation.

In terms of external legal constraints, the government should strengthen its supervision of AI recommendations and formulate stricter laws and regulations to standardize the use of AI recommendation technology. Laws and regulations should clearly define and categorize violations of AI recommendation and provide for corresponding penalties.

Conclusion

In this study, the following conclusions were obtained by studying and analyzing the three aspects of users’ cognitive and attitudinal research on algorithmic news, the effect of algorithmic reconstruction, and news dissemination bias:

Participants' use of information applications based on algorithmic recommendation is characterized by high frequency, short duration, and randomness, indicating that algorithmic news not only leads to fragmented reading habits but also fragments users' time. There is a significant difference between users with only an elementary school background and users with other educational backgrounds: the Sig. value is less than 0.05, indicating significant differences among educational groups on the same viewpoints and suggesting that educational background affects users' attitudes toward and perceptions of algorithmic news.

The average time lag between the release of a news video and its being pushed is longest when the user's news preference is unclear, and shortens once users show a news preference. In terms of novelty, when users did not specify their interests, the pushed news videos were least similar to one another and most novel. Once the virtual users showed different degrees of news preference, the distribution of cosine distances between pushed videos changed: the mean shrank, fluctuation increased, and the concentration near 1 weakened.

With artificial intelligence technology applied in news, circle information, group belonging, emotional infection, and self-worth identification are positively correlated with values at the environmental cognition level, while stereotype threat and situational attribution are negatively correlated with them.

Language:
English
Publication frequency:
1 time per year
Journal subjects:
Biological Sciences, Life Sciences, other, Mathematics, Applied Mathematics, General Mathematics, Physics, Physics, other