
      Self-Perceived Loneliness and Depression During the COVID-19 Pandemic: a Two-Wave Replication Study

      Preprint
      research-article

            Revision notes

            Editor: Dr Matthew O. Gribble

             

            [E1.1] What is the connection between this work and the primary themes of this environmental journal? Please be sure to clearly and properly situate the work within Planetary Health, GeoHealth, One Health, or another environmental health paradigm to contextualize your research on loneliness and depression as connected to an environmental challenge. This journal has primarily environmental audiences.

>> Thank you for your question. This paper captures the impact of the COVID-19 pandemic on individuals' immediate environments, specifically the effects of lockdown restrictions (e.g., forced limitations on people's physical spaces and social interactions with others) and their duration on individuals' mental and physical health. In particular, people's immediate home environment has become more than just a space to live in: it has simultaneously become their space to work, socialize, and play. The lockdown restrictions, including strict stay-at-home orders, have forcefully impacted people's immediate environment and, in turn, their health. We have now inserted a few sentences in the Introduction and Discussion sections to better flesh out these concepts (Lines 10-19 and 259-261 of the manuscript).

             

            [E1.2] Several of the reviewers raised concerns about methodological rigor, in particular regarding the data analysis approach. The investigators should better justify their approach or, if they agree with the reviewer that the assumptions of their approach are inconsistent with the design of the study, use an alternative approach. The investigators do not necessarily need to adopt the specific statistical test recommended by the reviewer, as there are other options that would also be appropriate, but the choice of data analysis approach should be clearly presented with the key assumptions of that approach and why that is suitable for the study design, data structure, and scientific hypothesis (or parameter) of interest.

>> We have now clarified the rationale guiding our analytical approach in the manuscript. See, for instance, Lines 165-172 and Lines 201-210.

             

            [E1.3] It may be difficult for the authors to respond to the sample size concerns raised regarding the approach chosen for data analysis; some discussion of what the implications of this sample size limitation may be for results and overall study conclusions would be appropriate. Sensitivity analyses to assess robustness of findings to model assumption violations, interpolation over sparse data, etc. are always welcome.

>> Given the small sample size, we have now removed the statistical analysis for wave 2 and kept only the graphical inspection of the data distribution. We highlighted throughout the manuscript that the graphical inspection must be considered only as a qualitative and preliminary insight into the data, as we could not run any statistical analysis. We have also discussed the implications of the small sample size in the manuscript at Lines 307-311. The section is as follows:

“These results have to be considered only as a qualitative and preliminary insight, since the sample size collected for the weeks of interest did not allow us to make any meaningful statistical inference. In fact, graphical disparities among scores might be mere random variation and might not reflect real differences.”

             

            Reviewer 1: Dr Youyou Wu

             

The paper has two goals: the first is to replicate a previous finding using the same dataset but a different machine learning model. The previous finding was that “perceived loneliness”, among 12 mental health indicators, is the most related to time into a COVID lockdown in the UK. The second goal is to confirm a U-shaped relationship between perceived loneliness and weeks into the lockdown, using a different dataset from the second national lockdown.

             

[R1.1] My biggest concern is that there is little discussion of the effect size. We only learn the MSE of the overall model and that “perceived loneliness” is relatively more related to time into lockdown than the other variables (but not by how much). The authors mentioned in their previous paper (CITE) that the overall performance is poor, which I'd agree with even without comparing the MSE or R2 with other similar machine learning tasks. Therefore, among a collection of highly correlated mental health variables that together are not so related to time into a lockdown, does it really matter that we identify the one that's slightly more related to time? I'd like to see more justification of how this analysis is meaningful, taking effect sizes into account.

>> Thank you for the comment. In this paper, depressive symptoms turned out to be the most time-sensitive variable in the dataset according to both the SVR and MLR models. The machine learning approach was adopted solely to help us identify and select, in a data-driven way, the variable to study under a statistical approach. This, in our opinion, is its usefulness. In fact, the models were trained not because we were interested in accurately predicting the week in lockdown per se, but because we wanted to obtain an objective ranking of how time-sensitive all the variables were, and then to study the most time-sensitive one, even if it was only slightly more related to time in lockdown. We have now added effect sizes for the statistical analysis and we have clarified the significance of the analysis in the text (Lines 271-274) as:

            “Since the focus of the study was not to assess the variables' predictive capability per se, it is worth noting that the low model performance does not affect the reliability of the variable importance ranking and, therefore, the identification of the most time-sensitive variable in the dataset [19].”

             

[19] A. Carollo, A. Bizzego, G. Gabrieli, K. K. Y. Wong, A. Raine, G. Esposito, I'm alone but not lonely: U-shaped pattern of self-perceived loneliness during the COVID-19 pandemic in the UK and Greece, Public Health in Practice 2 (2021) 100219.

             

[R1.2] Now assuming the purpose of the analysis is justified, I move on to the mechanics of the machine learning task. The analysis is based on a sample of 435 participants, which is admittedly quite large for a longitudinal study but small for a machine learning task. The authors are quite right about the need to replicate the effect using a different model given the small sample. Going down that route, I'd recommend going as far as replicating it using multiple models beyond the SVR to see if they agree. Having said that, I'd argue it's more important to replicate the finding across different data sources than using a different model. I hope the authors can search for other longitudinal data sources with similar variables and replicate the findings. At the very least, it would be good to know from the paper that there is no other suitable data source for this question and that the finding based on this one dataset is preliminary.

>> Thank you for the suggestion. We have now replicated the analysis using a Multiple Linear Regression (MLR) model as well, and depressive symptoms were again the most time-sensitive variable. Moreover, we agree with the Reviewer on the usefulness of replicating the results across data sources, but we did not find a dataset similar enough to ours in the literature. Hence, in the final part of the manuscript (Lines 330-334), we have now specified that it would be useful to replicate the findings across similar data sources and that our findings can only be considered preliminary:

            “Furthermore, to fully pursue the replication aims of the current study, it would be useful to apply the same machine learning and statistical approach across different data sources. As we did not find any dataset similar enough to the one we adopted, the results from the current paper can only be considered as preliminary.”

             

[R1.3] If I am reading Table 2 correctly, the sample size seems incredibly small (5 participants from week 3, and 2, 3, and 1 participant(s) from weeks 4, 5, and 6) for the second analysis. The week-by-week comparison would not be meaningful at all given the small sample. Hence the data from the second wave are not suitable for confirming or rejecting the U-shape finding from the first wave.

>> Thank you for the comment. Given the small sample size, we have now removed the statistical analysis for wave 2 and kept only the graphical inspection of the data distribution. We have now clarified that the part regarding data from wave 2, considering the small sample sizes by week, has only a qualitative and preliminary value for the current study. We have clarified these aspects in the Abstract and in the manuscript (Lines 217-221 and Lines 307-311):

“Furthermore, although the sample size by week in wave 2 was too small to provide a meaningful statistical insight, a qualitative and descriptive approach was adopted and a graphical U-shaped distribution between weeks 3 and 9 of lockdown was observed.”

             

“It is worth noting that, considering the limited sample size available for wave 2 from weeks 3 to 9, no statistically meaningful insight could be derived from the comparisons of groups; the second part of the study therefore has only a qualitative and descriptive significance and must be considered a preliminary approach.”

             

“These results have to be considered only as a qualitative and preliminary insight, since the sample size collected for the weeks of interest did not allow us to make any meaningful statistical inference. In fact, graphical disparities among scores might be mere random variation and might not reflect real differences.”

             

            Reviewer 2: Dr Clarissa Ferrari

             

The manuscript addresses a topical issue regarding the interrelations between mental health assessment and lockdown duration. The gathered data are of great interest and constitute a strong point of the study. In addition, the application of machine learning techniques in such a context represents an added value. However, many methodological problems considerably mitigate my enthusiasm. My major concerns regard the poor readability of the study and the choice to model lockdown duration as the outcome variable. The poor readability is mainly due to a lack of specifications, descriptions and details that make the analyses hard to reproduce. A paramount purpose of a scientific paper should be the reliability and reproducibility of the results through a detailed description of the applied models and methods (it would be very useful to add, perhaps in supplementary materials, the code or pseudo-code used for the analysis).

>> Thank you for the suggestion. We have made the study more readable by modifying the manuscript as suggested and by adding further details on the machine learning approach. To facilitate the reproducibility of our study, we have also added the code at the following link: https://doi.org/10.5522/04/20183858. We added this information at Lines 130-131 of the manuscript.

             

The second concern regards the SVR approach and in particular the choice to predict the lockdown duration. The rationale for which the mental health variables should predict the lockdown duration is not clear! It would be reasonable to assess the reverse relation, i.e. the relation between the duration of lockdown (as a predictor) and the mental health variables (as outcomes/dependent variables). Other major and minor comments and suggestions are reported below.

            >> In the study, we designed the model to predict the independent variable by starting from the dependent variables in order to obtain an index of time-sensitivity. We have clarified the approach at Lines 165-172 of the manuscript as:

“The assumption behind this approach was that the independent variable “Weeks in lockdown” would modulate, to a different extent, the scores of the dependent variables included in the dataset. Particularly, the most time-sensitive variable would be strongly modulated by time in lockdown and its scores would systematically co-vary with the variable “Weeks in lockdown”. Therefore, the most time-sensitive variable would also be the most informative and important for the model when trying to predict “Weeks in lockdown”.”
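To make this reverse-prediction rationale concrete, here is a minimal illustrative sketch in Python with scikit-learn (not the authors' released code, which is available at the DOI above); the simulated data and the use of a linear-kernel SVR are assumptions made purely for illustration:

```python
# Rank 12 mental-health variables by how informative they are when
# predicting "Weeks in lockdown" (illustrative sketch, simulated data).
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(435, 12))                   # 12 scores per participant
y = rng.integers(3, 8, size=435).astype(float)   # simulated week in lockdown

X_std = StandardScaler().fit_transform(X)        # common scale for the weights

svr = SVR(kernel="linear").fit(X_std, y)
mlr = LinearRegression().fit(X_std, y)

# Importance = absolute value of each variable's weight/coefficient
# (cf. the Figure 1 caption quoted in [R3.4] below).
svr_importance = np.abs(svr.coef_).ravel()
mlr_importance = np.abs(mlr.coef_)
most_time_sensitive = int(np.argmax(svr_importance))
```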

             

            Abstract:

[R2.1] The abstract is quite difficult to read. It is the first part on which a reader focuses his/her attention, so it has to convey the main information clearly. Please try to re-edit the abstract with well-separated subsections for background, methods, results and conclusions.

            >> We have modified the abstract to facilitate the reading. Moreover, we have inserted the suggested subsections of Background, Methods, Results, and Conclusions.

             

[R2.2] Lines 18-19: Please clarify here that this study excludes the Greek sample.

            >> We have now specified that the study only focuses on data from the UK lockdowns. The specific part in the abstract reads as follows:

            “The current paper aimed to test the robustness of these results by focusing on data from the first and second lockdown waves in the UK.”

             

[R2.3] Line 19: aim a) is not clear; dependence on...what? Please specify.

            >> We have rephrased as “[...] we tested a) the impact of the chosen model on the identification of the most time-sensitive variable in the period spent in lockdown.”

             

[R2.4] Line 27: the most important variable in...predicting what? Please clarify.

            >> We have rephrased it as “the most time-sensitive variable”.

             

            Introduction

[R2.5] Lines 74-65: From a statistical point of view, the sentence "found a statistically significant U-shaped pattern" does not make sense without further specifications. Did the authors test the U-shape with a Kolmogorov test for distributions?

            >> Thank you for the comment. The U-shaped distribution was not tested with a Kolmogorov test for distributions. Rather, it was suggested by the graphical distribution of self-perceived loneliness scores across weeks in lockdown and tested with multipair and pairwise Kruskal-Wallis tests. We have clarified this aspect at Lines 41-49 of the manuscript. The updated version is as follows:

            “Specifically, participants from the UK who took part in the study during week 6 of national lockdown reported significantly lower levels of self-perceived loneliness compared to their counterparts who completed the survey during week 3 of lockdown. Likewise, lower levels of self-perceived loneliness were observed for participants who completed the survey in weeks 4 and 6 of the Greek national lockdown. This pattern of results together with a graphical inspection suggested the existence of a U-shaped distribution in self-perceived loneliness levels by weeks in lockdown in both the UK and Greece.”

             

[R2.6] Lines 81-83: Aim a) seems to be a validation of a previously applied method and, as such, it should have been done in the previous paper. Paramount purposes of a scientific paper are the reliability and robustness of results (i.e. results should be robust in terms of the methodological approach or model used). If the Authors find different results in this study from their previous one, they would be contradicting themselves. Please explain better or provide a justification for this controversial aim.

            >> Thank you for the comment. In the first part of the paper, we wanted to examine whether, by using different predictive models, new time-sensitive variables (overlooked by the RandomForest model) to study under a statistical approach would emerge. We have clarified this aspect at Lines 58-63 of the manuscript as:

            “In this way, we wanted to verify if, when changing the predictive model, new variables with different patterns of time-sensitivity could be identified and studied under a statistical approach. This would provide insight into other time-sensitive variables that might have been overlooked by the previously adopted model - namely, the RandomForest model.”

             

[R2.7] Lines 86-87: please change the phrase "unique opportunity". Actually, every researcher should be able to replicate a previous study; this is a prerogative of a scientific paper!

>> We agree with the Reviewer and we have rephrased it in Line 69 of the manuscript as:

            “The current study provides the opportunity to uncover other aspects that may be significantly influenced by the lockdown restrictions in both the first and second waves of lockdown.”

             

            Table 1

[R2.8] To improve the readability and interpretability of the assessment scales, please provide the range for each of them in Table 1.

            >> We added the range of observed scores for each variable in Table 1.

             

[R2.9] It is not clear why some instruments have a Cronbach's alpha value and others do not. Please explain. Moreover, the use of Cronbach's alpha (for evaluating internal consistency) should be described somewhere in the Data analysis section.

>> Thank you for your question. We computed Cronbach's alpha for all the scales adopted in the study. The only exception is the variables related to the Physical Activity domain, as they consisted of single items (we clarified this aspect in the caption of Table 1). Also, in the Data analysis section we have now specified the use of Cronbach's alpha as an index of internal reliability (Lines 88-89 of the manuscript).

             

            Participants section

[R2.10] Lines 127-132 should be part of a methodological/data analysis section and not of the Participants section.

>> We have now moved this part to the Data Analysis subsection (now Lines 131-139).

             

[R2.11] Lines 135-138: the sentences reported in these lines allude to the presence of demographic features in Table 2; please re-edit (the same holds for lines 160-163).

>> Thank you for the comment. To clearly reflect the content of Table 1, we have moved it to the Data Analysis section, after the description of the procedure to compute the “Week in lockdown” variable (Lines 131-139 of the manuscript).

             

            Data Analysis section

[R2.12] Line 169: A different model with respect to...which one?

            >> We have rephrased the sentence in Lines 145-148 of the manuscript as: 

            “As compared to the RandomForest model adopted in Carollo et al. [19], in the current work we used two different machine learning models to identify the most time-sensitive variable (out of the 12 indices included).”

             

[19] A. Carollo, A. Bizzego, G. Gabrieli, K. K. Y. Wong, A. Raine, G. Esposito, I'm alone but not lonely: U-shaped pattern of self-perceived loneliness during the COVID-19 pandemic in the UK and Greece, Public Health in Practice 2 (2021) 100219.

             

[R2.13] Line 169: "data-driven", not "data-drive".

            >> Thank you. We have fixed the error.

             

[R2.14] Line 170: the most influential in...what? Maybe influential in explaining (or for) Self-Perceived Loneliness and Depression, I guess. Please explain.

            >> We have rephrased it in Lines 145-148 of the manuscript as: “[...] we used two different machine learning models to identify the most time-sensitive variable (out of the 12 indices included).”

             

[R2.15] Lines 179-185: Paramount targets of a scientific paper are readability and (mainly) reproducibility. The authors should provide all the necessary details and explanations to: i) easily understand the purpose and the results of each applied method, and ii) replicate the analyses. Please explain: 1) what the final purpose of the SVR is, 2) the choice of 10x15 values for the cross-validation, 3) the choice of a 75% vs 25% split for the training and test sets. Moreover, the output/dependent variable of the SVR is specified nowhere.

            >> We have added further information regarding the machine learning approach in the text. The parts read as:

- Lines 160-163: “While RandomForest's predictions are based on the creation of an ensemble of decision trees from the input variables, SVR is rooted in the derivation of a best-fit hyperplane and the MLR in linear relations between variables.”

             

            - Lines 175-178: “The cross-validation and the train-test split procedures are common practice in machine learning as they help to control the model's overfitting by evaluating the model's performances on unseen data [37].”

             

- Lines 163-165: “Data from 12 variables of interest (outlined in Table 1) were included in the models to predict the independent variable “Weeks in lockdown”.”

             

            [37] A. Bizzego, G. Gabrieli, M. H. Bornstein, K. Deater-Deckard, J. E. Lansford, R. H. Bradley, M. Costa, G. Esposito, Predictors of contemporary under-5 child mortality in low-and middle-income countries: a machine learning approach, International journal of environmental research and public health 18 (2021) 1315.
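As a sketch of the quoted evaluation protocol, assuming a 75/25 train-test split and 10-fold cross-validation on the training portion (the exact 10x15 hyperparameter grid mentioned by the reviewer is not reproduced here), one could write:

```python
# Train-test split plus cross-validation to evaluate MSE on unseen data.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(435, 12))              # simulated predictors
y = rng.normal(5.0, 1.5, size=435)          # simulated weeks in lockdown

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)  # 75% train / 25% test

svr = SVR(kernel="linear")
cv_mse = -cross_val_score(svr, X_train, y_train, cv=10,
                          scoring="neg_mean_squared_error")
print(f"mean cross-validated MSE: {cv_mse.mean():.3f}")

# Final check on held-out data the model never saw during training.
test_mse = ((svr.fit(X_train, y_train).predict(X_test) - y_test) ** 2).mean()
print(f"test MSE: {test_mse:.3f}")
```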

             

            [R2.16] lines 189-192 are unclear, please explain.

            >> We have clarified the sentence at Lines 184-188 of the manuscript. The part is as follows:

“On all the trainings' importance rankings, we computed a Borda count to determine the most important and informative variable for the model's prediction of the Weeks in lockdown. The Borda count is a method to derive a single list summarizing the information coming from a set of lists [38].”

             

            [38] G. Jurman, S. Riccadonna, R. Visintainer, C. Furlanello, Algebraic comparison of partial lists in bioinformatics, PloS one 7 (2012) e36540.
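A toy sketch of the Borda count just quoted: each per-training importance ranking awards points by position, and the points are summed across rankings. The variable names below are hypothetical labels, not the study's actual instruments:

```python
# Aggregate several importance rankings into a single consensus list.
from collections import defaultdict

def borda(rankings):
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - 1 - position  # top rank earns the most points
    return sorted(scores, key=scores.get, reverse=True)

rankings = [["depression", "loneliness", "stress", "activity"],
            ["depression", "stress", "loneliness", "activity"],
            ["loneliness", "depression", "stress", "activity"]]
print(borda(rankings))  # ['depression', 'loneliness', 'stress', 'activity']
```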

             

[R2.17] Lines 196-202: The Kruskal-Wallis test is the non-parametric counterpart of the ANOVA test for comparing independent samples. Here the Authors declare that they compare variable changes over time, i.e. that they compare correlated data (?). If so, the Kruskal-Wallis is not the right test to use. If the Authors want to compare the same variable, evaluated on the same sample, across time, they have to use the Friedman test. Differently, if the Authors want to compare independent samples, this should be better explained.

>> The study is cross-sectional and participants were divided into groups by week in lockdown. The multipair Kruskal-Wallis test was adopted to compare median scores across groups of participants (i.e., participants who took part in week 3 vs participants who took part in week 4 vs participants who took part in week 5, and so on). If the multipair test was significant, we assessed the differences between independent samples of “Week in lockdown” with post-hoc pairwise Kruskal-Wallis tests. We clarified this aspect in the manuscript at Lines 201-210 as:

“As the study had a cross-sectional design across waves of lockdown, participants were grouped by the “Week in lockdown” variable. “Week in lockdown” groups were compared in terms of the scores reported for the identified most time-sensitive variable. In this way, a significant result in the multipair Kruskal-Wallis test would indicate that levels of the identified variable significantly differed by “Weeks in lockdown” for at least two groups of weeks. If the multipair Kruskal-Wallis test suggested the existence of significant weekly variations, we conducted multiple pairwise Kruskal-Wallis tests with Bonferroni correction to compare week 7 scores to those of the other weeks.”
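A small SciPy sketch of the two-step procedure described in the quoted passage, on simulated week groups (the group sizes and scores are invented): an omnibus Kruskal-Wallis test across all week groups, followed by pairwise tests against week 7 with a Bonferroni-corrected threshold.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
weeks = {w: rng.normal(loc=abs(w - 5), scale=1.0, size=30)
         for w in range(3, 8)}               # simulated scores, weeks 3-7

H, p = kruskal(*weeks.values())              # omnibus ("multipair") test
if p < 0.05:
    others = [w for w in weeks if w != 7]
    alpha = 0.05 / len(others)               # Bonferroni-corrected threshold
    for w in others:
        H_w, p_w = kruskal(weeks[7], weeks[w])
        print(f"week 7 vs week {w}: H={H_w:.2f}, p={p_w:.4f}, "
              f"significant={p_w < alpha}")
```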

             

            Results

[R2.18] Lines 221-222: I am sorry, but I cannot see a clear U-shape in Figure 2. Please explain.

            >> We have now clarified in the text at Lines 231-236 of the manuscript as:

            “A closer look at boxplots representing depressive symptoms divided by week in lockdown suggests that, from week 3 to 7, the median score decreased in the first period (week 3 to week 4) and then increased again (from week 4 to week 7; see Figure 2). A decrease followed by an increase in scores suggests a U-shaped pattern for depressive symptoms in the first wave of UK lockdown.”

             

            Discussion

[R2.19] Line 254: The reader has to reach the Discussion section to learn which outcome variable is under investigation with the SVR: the lockdown duration. Moreover, the rationale for which the mental health variables should predict the lockdown duration is not clear. It would be reasonable to assess the reverse relation, i.e. the relation between the duration of lockdown (as a predictor) and the mental health variables (as outcomes/dependent variables). In fact, it seems to me that the Authors' true intention to assess the reverse relation is revealed by the statement in lines 304-306. It is worth noting this point, which appears crucial. The nature of the variables cannot be ignored. The lockdown duration cannot be a random variable, since it is measured without error and is the same for all subjects involved in the survey. Conversely, the mental health variables are random variables because they vary among subjects. In light of this, the whole paper's modelling should be rethought by considering the mental health variables as the main outcomes (the target variables) in relation with/affected by lockdown duration.

>> We have now clarified in the Methods section that “Data from 12 variables of interest (outlined in Table 1) were included in the models to predict the variable ‘Weeks in lockdown’”. Furthermore, we have clarified the rationale for using the dependent variables to predict the independent variable (Lines 165-172).

             

            Reviewer 3: Giulia Balboni

I enjoyed reading the paper and think that this may be an excellent opportunity to present the machine learning approach and its utility in the field of mental health. I would suggest the Authors emphasize this uniqueness. This paper is an excellent opportunity to introduce this method and show its advantages compared to the methods usually used in the field. Nevertheless, for this aim, the machine learning approach must be described in depth, and all its assumptions and characteristics must be made explicit using appropriate scientific language that can be easily understood.

             

[R3.1] Line 178: what are the differences between the models used, Random Forest and Support Vector Regressor? Why might it be interesting to study whether two different models produce the same results?

            >> Thank you for the question. We have clarified these aspects at Lines 58-63 and 160-163 of the manuscript. The parts are the following: 

            “In this way, we wanted to verify if, when changing the predictive model, new variables with different patterns of time-sensitivity could be identified and studied under a statistical approach. This would provide insight into other time-sensitive variables that might have been overlooked by the previously adopted model - namely, the RandomForest model.”

             

“While RandomForest's predictions are based on the creation of an ensemble of decision trees from the input variables, SVR is rooted in the derivation of a best-fit hyperplane and the MLR in linear relations between variables.”

             

            [R3.2] Line 186, Please describe the Mean Squared Error. Is there any cutoff or value range that may allow the reader to understand the present study's findings?

>> We described the Mean Squared Error in the manuscript at Lines 179-183. As the MSE has a descriptive function, there are no standard cutoffs or acceptable limits for its values. The part is:

            “In particular, the models' performances were evaluated by Mean Squared Error (MSE), which consists of the average squared difference between predicted and real values. Thus, a lower MSE value corresponds to a higher overlap between the real and predicted data.”
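Written out directly, the MSE from the quoted passage is just the mean of the squared prediction errors; the numbers below are invented purely to show the computation.

```python
import numpy as np

def mse(y_true, y_pred):
    """Average squared difference between observed and predicted values."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return ((y_true - y_pred) ** 2).mean()

print(mse([3, 4, 5, 6, 7], [3.5, 4.0, 5.5, 5.0, 7.5]))  # 0.35
```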

             

            [R3.3] Line 194, please describe the parameter C. What does it represent? Is there any cutoff or value range that may allow the reader to understand the present study's findings?

>> In SVR models, C is the regularization cost parameter, which determines the trade-off between minimizing the training error and minimizing the model complexity. As such, there are no cutoff values for it. We have now clarified this in the manuscript at Lines 190-192 as:

            “In SVR, the parameter C is a cost regularization parameter which determines the trade-off cost between minimizing the training error and minimizing model complexity [39].”

             

            [39] C.-H. Wu, G.-H. Tzeng, R.-H. Lin, A novel hybrid genetic algorithm for kernel function and parameter optimization in support vector regression, Expert Systems with Applications 36 (2009) 4725–4735
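As a hedged illustration of how C is typically tuned in practice (the paper's actual search grid is not given here, so the values below are assumptions), a cross-validated grid search can select it:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.normal(5.0, 1.5, size=200)

# Larger C penalizes training errors more heavily (tighter, more complex fit);
# smaller C favors a flatter, simpler model.
search = GridSearchCV(SVR(kernel="linear"),
                      param_grid={"C": [0.01, 0.1, 1, 10, 100]},
                      scoring="neg_mean_squared_error", cv=5)
search.fit(X, y)
print(search.best_params_)
```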

             

[R3.4] Line 224, Figure 1: please describe the metric used for the importance.

            >> We have clarified in the Figure’s caption as: 

            “The importance of the variables was derived from the trained predictive models as the absolute value of the variables’ weights or coefficients for the SVR and MLR, respectively.”

             

[R3.5] Line 259: based on which data can it be said that depression symptoms were the best at predicting lockdown duration in weeks?

>> We can say that depression symptoms were the most informative when trying to predict the week in lockdown, based on the model's importance ranking. We have specified this better in the manuscript at Lines 265-269. The part is:

“Based on the variables' importance ranking, depressive symptoms, over and above the other 11 health indices, were the most important variable for both the SVR and MLR models when determining the model's best fit to the data, and were the best at predicting lockdown duration in weeks.”

             

[R3.6] Line 102: was the order of the questionnaires randomized?

            >> Yes, the order of the questionnaires was randomized.

             

[R3.7] Lines 137 and 163: please also describe the age range.

            >> We have added the age ranges in Lines 112 and 127 of the manuscript.

             

[R3.8] Line 196: please justify the use of the non-parametric statistical test (and any other tests that will be used) and compute the effect size for any significant statistical results found.

            >> We have justified the use of the Kruskal-Wallis test (Lines 201-211 of the manuscript) and computed the effect size for significant results. The part regarding the use of the Kruskal-Wallis test is:

“As the study had a cross-sectional design across waves of lockdown, participants were grouped by the “Week in lockdown” variable. “Week in lockdown” groups were compared in terms of the scores reported for the identified most time-sensitive variable. In this way, a significant result in the multipair Kruskal-Wallis test would indicate that levels of the identified variable significantly differed by “Weeks in lockdown” for at least two groups of weeks. If the multipair Kruskal-Wallis test suggested the existence of significant weekly variations, we conducted multiple pairwise Kruskal-Wallis tests with Bonferroni correction to compare week 7 scores to those of the other weeks. Eta-squared was computed to estimate the magnitude of significant results [40, 41].”

             

            [40] A. Carollo, W. Chai, E. Halstead, D. Dimitriou, G. Esposito, An exploratory analysis of the effect of demographic features on sleeping patterns and academic stress in adolescents in China, International Journal of Environmental Research and Public Health 19 (2022) 7032.

             

            [41] M. Tomczak, E. Tomczak, The need to report effect size estimates revisited. an overview of some recommended measures of effect size, Trends in sport sciences 1 (2014) 19–25.
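For readers unfamiliar with eta-squared in this setting, the sketch below computes the H-based estimate described in [41], eta^2 = (H - k + 1) / (n - k), where k is the number of groups and n the total number of observations; the data are simulated for illustration.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)
groups = [rng.normal(loc=m, size=40) for m in (0.0, 0.4, 0.9)]

H, p = kruskal(*groups)
k, n = len(groups), sum(len(g) for g in groups)
eta_squared = (H - k + 1) / (n - k)   # effect size per Tomczak & Tomczak [41]
print(f"H={H:.2f}, p={p:.4f}, eta^2={eta_squared:.3f}")
```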

             

[R3.9] What is the utility of having found a U-shape?

>> Thank you for your question. In our opinion, one important implication of the finding of a U-shaped pattern in symptoms as a function of lockdown duration lies in the management of individuals' expectations of lockdowns. As our study's findings have shown, levels of symptoms are not stable within lockdown periods, and thus governments and healthcare workers should know when populations may need the most support. Conveying this to individuals at the start of any lockdown stay-at-home order would also greatly help them to 'expect' potential fluctuations. Informing healthcare workers can better prepare them for what support is needed and when it needs to be deployed to help individuals better cope with mental health symptoms. Overall, this knowledge can help manage expectations in populations and support systems to ensure that resources are allocated effectively, especially in future lockdown environments. We have clarified this at Lines 315-324 of the manuscript as:

“In conclusion, both self-perceived loneliness and depressive symptoms appear to follow U-shaped curves across periods of lockdown (although no significant difference emerged for scores of self-perceived loneliness by week in the second wave of the UK lockdown). Knowing the unfolding of these trajectories might be helpful for providing adequate support to the population in lockdown with the right timing. People might also be made aware of the possible fluctuations in self-perceived loneliness and depressive symptoms throughout the lockdown period. Overall, this knowledge can help manage expectations in populations and support systems to ensure that resources are allocated effectively, especially in future lockdown environments.”


             

            [R3.10] May it be interesting to verify the invariance of the results across age or gender?

>> We agree with the Reviewer's idea about the potentially useful insight provided by an investigation of individual differences in terms of age and gender. Nevertheless, this was outside the article's aims and interests, and we preferred not to add a further, different investigation. We thank the Reviewer for the suggestion and have added this as a possible future direction of research at Line 327 of the manuscript.

             

[R3.11] I think that the sample size for each week in the second wave is too small to allow any comparison, even with a non-parametric test.

>> We agree with the Reviewer. Given the small sample size by week, we have removed the statistical analysis for the second wave of lockdown and maintained only the graphical inspection. We have now clarified that the part regarding data from wave 2, considering the small sample sizes by week, has only a qualitative and preliminary value. We have clarified these aspects in the Abstract and in the manuscript (Lines 217-221 and Lines 307-311):

“Furthermore, although the sample size by week in wave 2 was too small to provide a meaningful statistical insight, a qualitative and descriptive approach was adopted and a graphical U-shaped distribution between weeks 3 and 9 of lockdown was observed.”

             

“It is worth noting that, considering the limited sample size available for wave 2 from weeks 3 to 9, no statistically meaningful insight could be derived from the comparisons of groups; the second part of the study therefore has only a qualitative and descriptive significance and must be considered a preliminary approach.”

             

“These results have to be considered only as a qualitative and preliminary insight, since the sample size collected for the weeks of interest did not allow us to make any meaningful statistical inference. In fact, graphical disparities among scores might be mere random variation and might not reflect real differences.”

            Abstract

Background: The global COVID-19 pandemic has forced countries to impose strict lockdown restrictions and mandatory stay-at-home orders, with varying impacts on individuals' health. Combining a data-driven machine learning paradigm and a statistical approach, our previous paper documented a U-shaped pattern in levels of self-perceived loneliness in both the UK and Greek populations during the first lockdown (17 April to 17 July 2020). The current paper aimed to test the robustness of these results by focusing on data from the first and second lockdown waves in the UK.

Methods: We tested a) the impact of the chosen model on the identification of the most time-sensitive variable in the period spent in lockdown. Two new machine learning models, namely a support vector regressor (SVR) and a multiple linear regressor (MLR), were adopted to identify the most time-sensitive variable in the UK dataset from wave 1 (n = 435). In the second part of the study, we tested b) whether the pattern of self-perceived loneliness found in the first UK national lockdown was generalizable to the second wave of the UK lockdown (17 October 2020 to 31 January 2021). To do so, data from wave 2 of the UK lockdown (n = 263) were used to conduct a graphical inspection of the week-by-week distribution of self-perceived loneliness scores.

Results: In both the SVR and MLR models, depressive symptoms turned out to be the most time-sensitive variable during the lockdown period. Statistical analysis of depressive symptoms by week of lockdown revealed a U-shaped pattern between weeks 3 and 7 of wave 1 of the UK national lockdown. Furthermore, although the sample size by week in wave 2 was too small to provide a meaningful statistical insight, a graphical U-shaped distribution between weeks 3 and 9 of lockdown was observed.

Conclusions: Consistent with past studies, these preliminary results suggest that self-perceived loneliness and depressive symptoms may be two of the most relevant symptoms to address when imposing lockdown restrictions.

            Content

            Author and article information

            Journal
            UCL Open: Environment Preprint
            UCL Press
            30 June 2022
            Affiliations
            [1 ] Department of Psychology and Cognitive Science, University of Trento, Italy
            [2 ] School of Social Sciences, Nanyang Technological University, Singapore
            [3 ] Department of Psychology and Human Development, University College London, London, UK
            [4 ] Departments of Criminology, Psychiatry, and Psychology, University of Pennsylvania
            Author notes
            Author information
            https://orcid.org/0000-0002-2737-0218
            https://orcid.org/0000-0002-1586-8350
            https://orcid.org/0000-0002-9846-5767
            https://orcid.org/0000-0002-2962-8438
            https://orcid.org/0000-0002-3756-4307
            https://orcid.org/0000-0002-9442-0254
            Article
            10.14324/111.444/000095.v2
            ae85a8bc-d49c-4118-9033-fa91db636a2b

            This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY) 4.0 https://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

            History
            : 8 October 2021
            Funding
            UCL Global Engagement Fund 563920.100.177785

            The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
            Psychology,Clinical Psychology & Psychiatry,Public health
            COVID-19,depression,lockdown,loneliness,global study,machine learning,SARS-CoV-2,Health

            Comments

            Date: 01 September 2022

            Handling Editor: Prof Dan Osborn

            Editorial decision: Request revision. The Handling Editor requested revisions; the article has been returned to the authors to make this revision.

            2022-09-01 16:06 UTC

            Date: 21 July 2022

            Handling Editor: Prof Dan Osborn

The article has been revised; it remains a preprint and peer review has not been completed. It is under consideration following submission to UCL Open: Environment for open peer review.

            2022-07-21 13:59 UTC
