Objective: The purpose of this study was to develop and validate a predictive model of cognitive impairment in older adults based on a novel machine learning (ML) algorithm.
Methods: Complete data for 2,226 participants aged 60–80 years were extracted from the 2011–2014 National Health and Nutrition Examination Survey database. Cognitive ability was assessed using a composite cognitive functioning score (Z-score) calculated from the Consortium to Establish a Registry for Alzheimer's Disease Word Learning and Delayed Recall tests, the Animal Fluency Test, and the Digit Symbol Substitution Test. Thirteen demographic characteristics and risk factors associated with cognitive impairment were considered: age, sex, race, body mass index (BMI), alcohol consumption, smoking status, direct HDL-cholesterol level, stroke history, dietary inflammatory index (DII), glycated hemoglobin (HbA1c), Patient Health Questionnaire-9 (PHQ-9) score, sleep duration, and albumin level. Feature selection was performed using the Boruta algorithm. Models were built with ten-fold cross-validation using five machine learning (ML) algorithms: generalized linear model (GLM), random forest (RF), support vector machine (SVM), artificial neural network (ANN), and stochastic gradient boosting (SGB). The performance of these models was evaluated in terms of discriminatory power and clinical applicability.
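The cross-validated model comparison described above can be sketched as follows. Synthetic data stands in for the NHANES variables, and the study's Boruta step (typically run with the third-party `boruta` package) is omitted here; only the ten-fold cross-validation with AUC scoring for three of the five algorithms is illustrated, using scikit-learn.

```python
# Minimal sketch of the study's model-comparison design (illustrative only):
# ten-fold cross-validation scored by AUC across several classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 13 candidate predictors, binary outcome (impaired or not).
X, y = make_classification(n_samples=500, n_features=13, n_informative=10,
                           random_state=0)

models = {
    "GLM": LogisticRegression(max_iter=1000),  # logistic GLM for a binary outcome
    "RF": RandomForestClassifier(random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True, random_state=0)),
}

# Ten-fold cross-validation, discriminatory power measured by ROC AUC.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```

In practice the Boruta step would run before this comparison, reducing the 13 candidate predictors to the retained subset.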
Results: The study ultimately included 2,226 older adults, of whom 384 (17.25%) had cognitive impairment. After random assignment, 1,559 and 667 older adults were included in the training and test sets, respectively. Ten variables (age, race, BMI, direct HDL-cholesterol level, stroke history, DII, HbA1c, PHQ-9 score, sleep duration, and albumin level) were selected to construct the models. The GLM, RF, SVM, ANN, and SGB models achieved areas under the receiver operating characteristic curve of 0.779, 0.754, 0.726, 0.776, and 0.754, respectively, in the test set. Among all models, the GLM model had the best predictive performance in terms of discriminatory power and clinical applicability.
Conclusions: ML models can be a reliable tool to predict the occurrence of cognitive impairment in older adults. This study used machine learning methods to develop and validate a well-performing risk prediction model for the development of cognitive impairment in the elderly.
To understand students’ learning behaviors, this study uses machine learning technologies to analyze data from interactive learning environments and predict students’ learning outcomes. Applying a variety of machine learning classification methods to quiz results and programming system logs, the study found that students’ learning characteristics were correlated with their learning performance when they encountered similar programming practice. We used random forest (RF), support vector machine (SVM), logistic regression (LR), and neural network (NN) algorithms to predict whether students would submit their coursework on time. Among them, the NN algorithm showed the best prediction results. Education-related data can thus be modeled with machine learning techniques, and different machine learning models with different hyperparameters can be tuned to obtain better results.
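The on-time-submission prediction task can be illustrated with a small neural-network sketch. The feature names below (quiz score, practice attempts, time on task) and the synthetic labels are hypothetical stand-ins for the study's actual log variables, chosen only to show the scikit-learn workflow.

```python
# Hedged sketch: a neural network predicting on-time submission from
# hypothetical log-derived features (not the study's real data).
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400
quiz_score = rng.uniform(0, 100, n)       # quiz performance
attempts = rng.poisson(5, n)              # programming practice attempts
time_on_task = rng.exponential(30, n)     # minutes logged in the system

X = np.column_stack([quiz_score, attempts, time_on_task])
# Synthetic label: more engaged students tend to submit on time.
on_time = (0.02 * quiz_score + 0.1 * attempts + 0.01 * time_on_task
           + rng.normal(0, 0.5, n)) > 1.5

X_train, X_test, y_train, y_test = train_test_split(X, on_time, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
clf.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

Swapping `MLPClassifier` for `RandomForestClassifier`, `SVC`, or `LogisticRegression` in the same pipeline reproduces the kind of side-by-side comparison the study reports.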
The tragedy of the commons refers to the overuse, and resulting rent dissipation, of resources that are rival in consumption but lack excludability. The tragedy of the anticommons, by contrast, is closely connected with the underuse of resources that are rival in consumption but subject to excessively strong excludability. Prior studies proved that the tragedy of the commons and the tragedy of the anticommons are symmetric from the perspective of pure mathematics, especially game theory; this was later refuted by behavioral economics experiments, according to which the tragedy of the anticommons is more severe than the tragedy of the commons. The asymmetry between the two tragedies is thus a paradox produced by these different research methods. This paradox shows that there are imperfections in the completely rational economic man hypothesis set up by neoclassical economics. As a fundamental theory, the tragedy of the commons is quite influential in many disciplines, such as microeconomics, public sector economics, ecological economics, environmental economics, management, sociology, property law, and political science. The tragedy of the anticommons has likewise opened the door to both theoretical research and practical application since its acceptance by Nobel laureate Buchanan, the main founder of the public choice school. Only when these theoretical issues are thoroughly discussed and clarified can people avoid misunderstanding or misusing the commons theory; it is therefore necessary to elucidate the paradox between them. Based on Simon’s bounded rationality, Kahneman and Tversky’s prospect theory and value function, Thaler’s mental accounting and endowment effect, and other cognitive-psychological tools, this study clearly shows that agents’ decision-making process is not based solely on the long-believed marginal benefit and marginal cost analysis advocated by traditional neoclassical economists.
Rather, agents’ decision-making is a process in which agents selectively absorb and encode objective marginal revenue and marginal cost and feed the relevant information to the brain. What plays the directly decisive role is therefore not objective marginal revenue and marginal cost per se, but the mentally perceived subjective utility of marginal revenue and marginal cost. Following this line of inquiry, the paradox between the tragedy of the commons and the tragedy of the anticommons is elucidated from the perspective of cognitive psychology.
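The game-theoretic symmetry claim discussed above can be reproduced in a standard linear-demand model in the spirit of Buchanan and Yoon's treatment of symmetric tragedies; the parameter values below are arbitrary illustrations, not figures from this study.

```python
# Illustrative linear-demand model of the symmetric commons/anticommons result.
# Per-unit value of use is p(x) = a - b*x; marginal cost is c.
a, b, c, n = 10.0, 1.0, 2.0, 3  # n = number of users / permit holders

# Efficient (single-owner) use maximizes (a - b*x - c) * x.
x_optimal = (a - c) / (2 * b)

# Commons: n users each choose their own use, ignoring the congestion
# externality; the symmetric Nash equilibrium gives total use above optimum.
x_commons = n * (a - c) / (b * (n + 1))          # overuse

# Anticommons: n permit holders each set an access fee independently; the
# stacked fees in the symmetric Nash equilibrium push use below optimum.
x_anticommons = (a - c) / (b * (n + 1))          # underuse

print(f"optimal use     : {x_optimal:.2f}")
print(f"commons use     : {x_commons:.2f} (overuse)")
print(f"anticommons use : {x_anticommons:.2f} (underuse)")

# Symmetry: the two equilibria deviate from the optimum by the same amount.
assert abs((x_commons - x_optimal) - (x_optimal - x_anticommons)) < 1e-9
```

With n = 1 both distortions vanish; as n grows, overuse and underuse worsen by equal amounts, which is the purely mathematical symmetry that the behavioral experiments cited above call into question.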
Psycholinguistics and neurolinguistics have seldom been used to investigate the cultural component of language. In this study, we suggest a scientific methodology to study the neurocognitive mechanisms induced by the interaction between multilingualism and cross-cultural differences, especially during translation between a source language (SL) and a target language (TL). Using the context of multilingual exchange between a tonal language (Chinese) and an atonal language (English), we argue that translation theories, numerous and efficacious as they are, lack the competence to bring absolute clarity to the complex cross-cultural dimension of languages when it comes to accuracy in translation. Echoing this, this study attempts to apply neuroscience in blending cross-cultural diversity and neurolinguistics into an all-in-one translation approach, “multicultural neurolinguistics,” between an SL and a given TL. The linguistic examination in this study shows that “multicultural neurolinguistics” can provide a unique framework for addressing translation barriers and establish a cross-cultural and multilingual network adapted to the particular circumstance. This research contributes to the linguistic literature by bringing a “multicultural neurolinguistics” resolution to the cultural diversity question in translation.
Neuroeconomics has seldom been used to investigate the impact of culture on international trade. This research proposes a scientific approach to investigate how cross-cultural differences contribute to the conceptualization of international trade patterns globally. International business relations are directly influenced by factors such as cultural variations that distinguish one foreign market from another; how well these cultural differences are understood can therefore determine whether business opportunities succeed. In response to the scarcity of scientific investigation of cultural influence on international trade, the purpose of this study is to propose a neuroeconomic framework as a strategic instrument to elucidate the cross-cultural dimension of international commercial relations. Echoing this, our study uses cultural diversities and cognitive classifications established in the literature to develop a unique scientific tool for the conceptualization of international trade patterns across the world. This research establishes the cognitive mechanism of cross-cultural diversity as a novel framework to conceptualize international trade patterns. By unveiling the cognitive process of cross-cultural diversity, this article provides an instrument to unlock the trade barriers of individualism and collectivism across nations.
Grounded in cognitive consistency theory, this paper adopts the prime-probe paradigm and an electroencephalography (EEG) experiment to examine the impact of congruence between country-of-origin (COO) stereotypes and brand positioning on consumer behavior, the boundary effect of brand positioning strategy, and the underlying cognitive mechanism. Behaviorally, consumers show a higher purchase intention in the congruence condition. Moreover, this congruence effect on purchase intention is found for competence brand positioning strategies rather than warmth brand positioning strategies. At the brain level, we found that, compared with the congruence condition, the incongruence condition enhances consumers' cognitive conflict, reflected in enhanced frontal theta-band oscillation. Furthermore, the cognitive conflict effect is accentuated in the competence positioning strategy condition rather than the warmth positioning strategy condition, confirming the boundary effect of brand positioning strategy at the brain level. These findings provide neural evidence that the congruence between COO stereotypes and brand positioning influences consumer purchase behavior, reveal a boundary effect in COO stereotype-brand positioning congruence, and highlight the importance of the competence dimension. Finally, the theoretical and practical implications are discussed.