A total of 1086 literature shares have been found; this page shows entries 901 to 920.
901.
张德祥 (2022-06-30 16:36):
#paper https://doi.org/10.3390/e24060819 Competency in Navigating Arbitrary Spaces as an Invariant for Analyzing Cognition in Diverse Embodiments. We take our ability to navigate 3D space for granted, yet it is far from trivial: many people find spatial orientation genuinely difficult. From a cognitive perspective, moreover, mastering different cognitive skills and bodies of knowledge can itself be framed as navigation in particular mathematical spaces. The paper surveys how biological systems navigate a variety of such spaces, for example the unfolding of DNA expression or of body development; adaptive behavior in these spaces predates brains, and it is remarkably robust and intelligent. The authors derive an abstraction of action spaces from active inference under the free energy principle. For more, see: https://mp.weixin.qq.com/s/e6xmn7Xo-mp9UuuxKWVJ6g
Abstract:
One of the most salient features of life is its capacity to handle novelty and namely to thrive and adapt to new circumstances and changes in both the environment and internal components. An understanding of this capacity is central to several fields: the evolution of form and function, the design of effective strategies for biomedicine, and the creation of novel life forms via chimeric and bioengineering technologies. Here, we review instructive examples of living organisms solving diverse problems and propose competent navigation in arbitrary spaces as an invariant for thinking about the scaling of cognition during evolution. We argue that our innate capacity to recognize agency and intelligence in unfamiliar guises lags far behind our ability to detect it in familiar behavioral contexts. The multi-scale competency of life is essential to adaptive function, potentiating evolution and providing strategies for top-down control (not micromanagement) to address complex disease and injury. We propose an observer-focused viewpoint that is agnostic about scale and implementation, illustrating how evolution pivoted similar strategies to explore and exploit metabolic, transcriptional, morphological, and finally 3D motion spaces. By generalizing the concept of behavior, we gain novel perspectives on evolution, strategies for system-level biomedical interventions, and the construction of bioengineered intelligences. This framework is a first step toward relating to intelligence in highly unfamiliar embodiments, which will be essential for progress in artificial intelligence and regenerative medicine and for thriving in a world increasingly populated by synthetic, bio-robotic, and hybrid beings.
902.
张德祥 (2022-06-30 16:23):
#paper https://doi.org/10.3389/fncom.2020.00041 An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case. Concept learning is a hard problem for AI: learning new concepts, pruning redundant concepts or representations, generalizing better, and doing all of this with unsupervised learning is even harder when the difficulties are combined. This paper makes an attempt at concept learning within active inference and validates it with simulations that give encouraging results; worth a read: https://mp.weixin.qq.com/s/lSkIsuTiDESVBxZcm9PY-w (The model-reduction identity the approach relies on is restated after the abstract below.)
Abstract:
Within computational neuroscience, the algorithmic and neural basis of structure learning remains poorly understood. Concept learning is one primary example, which requires both a type of internal model expansion process (adding novel hidden states that explain new observations), and a model reduction process (merging different states into one underlying cause and thus reducing model complexity via meta-learning). Although various algorithmic models of concept learning have been proposed within machine learning and cognitive science, many are limited to various degrees by an inability to generalize, the need for very large amounts of training data, and/or insufficiently established biological plausibility. Using concept learning as an example case, we introduce a novel approach for modeling structure learning-and specifically state-space expansion and reduction-within the active inference framework and its accompanying neural process theory. Our aim is to demonstrate its potential to facilitate a novel line of active inference research in this area. The approach we lay out is based on the idea that a generative model can be equipped with extra (hidden state or cause) "slots" that can be engaged when an agent learns about novel concepts. This can be combined with a Bayesian model reduction process, in which any concept learning-associated with these slots-can be reset in favor of a simpler model with higher model evidence. We use simulations to illustrate this model's ability to add new concepts to its state space (with relatively few observations) and increase the granularity of the concepts it currently possesses. We also simulate the predicted neural basis of these processes. We further show that it can accomplish a simple form of "one-shot" generalization to new stimuli. Although deliberately simple, these simulation results highlight ways in which active inference could offer useful resources in developing neurocomputational models of structure learning. They provide a template for how future active inference research could apply this approach to real-world structure learning problems and assess the added utility it may offer.
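For context, the Bayesian model reduction step described in the abstract rests on a standard identity, restated here in generic notation (my own summary, not the paper's formulas): if the full model has prior p(\theta), approximate posterior q(\theta) and evidence p(o), then a reduced model that only changes the prior to \tilde{p}(\theta) has log evidence

\ln \tilde{p}(o) \;=\; \ln p(o) + \ln \int q(\theta)\, \frac{\tilde{p}(\theta)}{p(\theta)}\, d\theta

A concept "slot" is reset in favor of the simpler model whenever the reduced prior yields the higher evidence, that is, whenever the log of the integral term is positive; the comparison can be evaluated without refitting the model.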
903.
张德祥 (2022-06-30 16:22):
#paper https://doi.org/10.1016/j.neunet.2021.09.011 World model learning and inference. LeCun recently proposed his world-model architecture for AGI; his reputation rests on foundational work in deep learning, and the biological-cognition side of his proposal is comparatively thin. The overview in the second part of this paper is layered and progressive: it works up from perception over different timescales to the hierarchy that runs from perception to action to reasoning. A deep treatment; see also: https://mp.weixin.qq.com/s/MwBCBIvRG5HdcDwJL0rK5w (The free energy definition the framework builds on is restated after the abstract below.)
Abstract:
Understanding information processing in the brain-and creating general-purpose artificial intelligence-are long-standing aspirations of scientists and engineers worldwide. The distinctive features of human intelligence are high-level cognition and control in various interactions with the world including the self, which are not defined in advance and are vary over time. The challenge of building human-like intelligent machines, as well as progress in brain science and behavioural analyses, robotics, and their associated theoretical formalisations, speaks to the importance of the world-model learning and inference. In this article, after briefly surveying the history and challenges of internal model learning and probabilistic learning, we introduce the free energy principle, which provides a useful framework within which to consider neuronal computation and probabilistic world models. Next, we showcase examples of human behaviour and cognition explained under that principle. We then describe symbol emergence in the context of probabilistic modelling, as a topic at the frontiers of cognitive robotics. Lastly, we review recent progress in creating human-like intelligence by using novel probabilistic programming languages. The striking consensus that emerges from these studies is that probabilistic descriptions of learning and inference are powerful and effective ways to create human-like artificial intelligent machines and to understand intelligence in the context of how humans interact with their world.
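For reference, the variational free energy at the heart of the free energy principle, in its standard textbook form (not copied from this paper's notation), is

F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] - \ln p(o) \;\ge\; -\ln p(o)

Minimizing F with respect to the approximate posterior q(s) implements perception (approximate Bayesian inference), while selecting actions that minimize expected free energy implements planning, which is the perception-to-action-to-inference hierarchy the review walks through.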
904.
prayer (2022-06-30 11:49):
#paper doi:10.1016/j.cell.2022.04.003; Cell, 2022, Spatiotemporal transcriptomic atlas of mouse organogenesis using DNA nanoball-patterned arrays. A May Cell paper from BGI that uses the Stereo-seq technology (large field of view, single-cell resolution, high sensitivity) to map the spatiotemporal transcriptome of mouse organogenesis across embryonic stages. I do not yet fully understand the technical methods and need to study them further. Raw data: CNP0001543 (https://db.cngb.org/search/project/CNP0001543)
IF:45.500Q1 Cell, 2022-05-12. DOI: 10.1016/j.cell.2022.04.003 PMID: 35512705
Abstract:
Spatially resolved transcriptomic technologies are promising tools to study complex biological processes such as mammalian embryogenesis. However, the imbalance between resolution, gene capture, and field of view of current methodologies precludes their systematic application to analyze relatively large and three-dimensional mid- and late-gestation embryos. Here, we combined DNA nanoball (DNB)-patterned arrays and in situ RNA capture to create spatial enhanced resolution omics-sequencing (Stereo-seq). We applied Stereo-seq to generate the mouse organogenesis spatiotemporal transcriptomic atlas (MOSTA), which maps with single-cell resolution and high sensitivity the kinetics and directionality of transcriptional variation during mouse organogenesis. We used this information to gain insight into the molecular basis of spatial cell heterogeneity and cell fate specification in developing tissues such as the dorsal midbrain. Our panoramic atlas will facilitate in-depth investigation of longstanding questions concerning normal and abnormal mammalian development.
905.
masion (2022-06-30 04:24):
#paper doi:10.1111/1365-2745.12025. Identification of 100 fundamental ecological questions. Journal of Ecology, January 2013. Led by William J. Sutherland, a large group of scientists ran an exercise to select the key open questions in fundamental ecology, hoping to take stock of the field and highlight priorities for future research. 388 participants first submitted 754 questions, which were then narrowed down to 100 through repeated discussion, rewording and rounds of voting. In this Journal of Ecology paper the 100 questions are grouped under seven themes: ecology and evolution, populations, disease and micro-organisms, communities and diversity, ecosystems and functioning, human impacts and global change, and methods. Some of these questions have since seen clear progress, for example measuring ecosystem resilience, while many more remain open. Revisiting them may still be helpful for thinking about where ecology should go next.
Abstract:
Fundamental ecological research is both intrinsically interesting and provides the basic knowledge required to answer applied questions of importance to the management of the natural world. The 100th anniversary of the British Ecological Society in 2013 is an opportune moment to reflect on the current status of ecology as a science and look forward to high-light priorities for future work. To do this, we identified 100 important questions of fundamental importance in pure ecology. We elicited questions from ecologists working across a wide range of systems and disciplines. The 754 questions submitted (listed in the online appendix) from 388 participants were narrowed down to the final 100 through a process of discussion, rewording and repeated rounds of voting. This was done during a two-day workshop and thereafter. The questions reflect many of the important current conceptual and technical pre-occupations of ecology. For example, many questions concerned the dynamics of environmental change and complex ecosystem interactions, as well as the interaction between ecology and evolution. The questions reveal a dynamic science with novel subfields emerging. For example, a group of questions was dedicated to disease and micro-organisms and another on human impacts and global change reflecting the emergence of new subdisciplines that would not have been foreseen a few decades ago. The list also contained a number of questions that have perplexed ecologists for decades and are still seen as crucial to answer, such as the link between population dynamics and life-history evolution. Synthesis. These 100 questions identified reflect the state of ecology today. Using them as an agenda for further research would lead to a substantial enhancement in understanding of the discipline, with practical relevance for the conservation of biodiversity and ecosystem function.
906.
颜林林 (2022-06-30 00:17):
#paper doi:10.1038/s41597-022-01450-y Scientific Data, 2022, HunCRC: annotated pathological slides to enhance deep learning applications in colorectal cancer screening. The Nature journal Scientific Data really is a treasure trove. This paper from Hungary shares a very useful dataset: 200 H&E-stained colorectal tumor tissue slides were whole-slide scanned at 40x, annotated by pathologists, and split into many image patches of different classes, ready for downstream pathology image analysis of colorectal cancer. Commendably, the whole workflow from sample collection to data processing is described in detail, and the processing code, the annotated original images, and the processed class-labeled patches are all openly available for direct download. (A minimal patch-loading sketch follows the abstract below.) Code: https://github.com/qbeer/qupath-binarymask-extension https://github.com/patbaa/crc_data_paper Raw images: https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=91357370 Processed data: https://figshare.com/articles/dataset/patches_and_local_annotations_slide_200_zoom_124x124_um2/19500266
IF:5.800Q1 Scientific data, 2022-06-28. DOI: 10.1038/s41597-022-01450-y PMID: 35764660
Abstract:
Histopathology is the gold standard method for staging and grading human tumors and provides critical information for the oncoteam's decision making. Highly-trained pathologists are needed for careful microscopic analysis of the slides produced from tissue taken from biopsy. This is a time-consuming process. A reliable decision support system would assist healthcare systems that often suffer from a shortage of pathologists. Recent advances in digital pathology allow for high-resolution digitalization of pathological slides. Digital slide scanners combined with modern computer vision models, such as convolutional neural networks, can help pathologists in their everyday work, resulting in shortened diagnosis times. In this study, 200 digital whole-slide images are published which were collected via hematoxylin-eosin stained colorectal biopsy. Alongside the whole-slide images, detailed region level annotations are also provided for ten relevant pathological classes. The 200 digital slides, after pre-processing, resulted in 101,389 patches. A single patch is a 512 × 512 pixel image, covering 248 × 248 μm tissue area. Versions at higher resolution are available as well. Hopefully, HunCRC, this widely accessible dataset will aid future colorectal cancer computer-aided diagnosis and research.
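The abstract's numbers imply a pixel size of roughly 248/512 ≈ 0.48 μm. Below is a minimal PyTorch loader sketch for patch data of this kind; the directory layout (one folder per class of 512 x 512 PNG patches) and the class handling are my own assumptions, not the published HunCRC structure.

# Minimal sketch of a patch loader for HunCRC-style data.
# Assumption: root/<class_name>/<patch>.png, one folder per pathological class.
import glob, os
from PIL import Image
from torch.utils.data import Dataset

MICRONS_PER_PIXEL = 248 / 512  # each 512 x 512 patch covers 248 x 248 um

class PatchDataset(Dataset):
    def __init__(self, root, transform=None):
        self.classes = sorted(os.listdir(root))            # assumed class folders
        self.samples = [(p, label)
                        for label, cls in enumerate(self.classes)
                        for p in glob.glob(os.path.join(root, cls, "*.png"))]
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(path).convert("RGB")               # 512 x 512 RGB patch
        return (self.transform(img) if self.transform else img), label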
907.
颜林林 (2022-06-29 22:30):
#paper doi:10.1002/humu.24424 Human Mutation, 2022, Screening of potential novel candidate genes in schwannomatosis patients. This paper studies schwannomatosis, a tumor syndrome arising from the nerve sheaths of peripheral nerves with a strong hereditary component; patients are usually screened for germline variants in three genes, NF2, SMARCB1 and LZTR1. A considerable fraction of patients, however, carry no variant in these genes, which suggests additional susceptibility genes, and this study set out to find them. The cohort comprised sporadic patients from 75 families who had screened negative for pathogenic variants in those three genes; the authors widened the screen using NGS, MLPA and PCR plus Sanger sequencing, found candidate pathogenic variants in genes such as DGCR8, COQ6, CDKN2A and CDKN2B, and, combined with earlier literature, inferred that these may be related to disease predisposition, providing leads for future mechanistic studies. The logic and methods here are also the standard playbook for extending the set of genes implicated in a hereditary disease.
IF:3.300Q2 Human mutation, 2022-10. DOI: 10.1002/humu.24424 PMID: 35723634
Abstract:
Schwannomatosis comprises a group of hereditary tumor predisposition syndromes characterized by, usually benign, multiple nerve sheath tumors, which frequently cause severe pain that does not typically respond to drug treatments. The most common schwannomatosis-associated gene is NF2, but SMARCB1 and LZTR1 are also associated. There are still many cases in which no pathogenic variants (PVs) have been identified, suggesting the existence of as yet unidentified genetic risk factors. In this study, we performed extended genetic screening of 75 unrelated schwannomatosis patients without identified germline PVs in NF2, LZTR1, or SMARCB1. Screening of the coding region of DGCR8, COQ6, CDKN2A, and CDKN2B was carried out, based on previous reports that point to these genes as potential candidate genes for schwannomatosis. Deletions or duplications in CDKN2A, CDKN2B, and adjacent chromosome 9 region were assessed by multiplex ligation-dependent probe amplification analysis. Sequencing analysis of a patient with multiple schwannomas and melanomas identified a novel duplication in the coding region of CDKN2A, disrupting both p14ARF and p16INK4a. Our results suggest that none of these genes are major contributors to schwannomatosis risk but the possibility remains that they may have a role in more complex mechanisms for tumor predisposition.
908.
尹志 (2022-06-28 22:16):
#paper doi:10.1093/nar/gkac010 Nucleic Acids Research, Volume 50, Issue 8, 6 May 2022, AggMapNet: enhanced and explainable low-sample omics deep learning with feature-aggregated multi-channel networks. Learning from omics-based biomedical data usually means high-dimensional features and small sample sizes, which is a challenge for today's mainstream deep learning. The paper first proposes an unsupervised feature-aggregation technique, AggMap, which aggregates and maps omics features into multi-channel 2D spatially correlated feature maps (Fmaps) based on the intrinsic correlations among the features; on benchmark data AggMap shows strong feature-reconstruction ability compared with existing algorithms. The authors then feed AggMap's multi-channel Fmaps into a multi-channel deep model, AggMapNet, which beats the state of the art on 18 low-sample omics benchmarks and is robust on noisy data and disease-classification tasks. On interpretability, AggMapNet's Simply-explainer module identifies key metabolites and proteins for COVID-19 detection and severity prediction. Overall the paper offers a pipeline for modeling small-sample omics data: unsupervised feature restructuring with AggMap plus a supervised, explainable AggMapNet deep model. A few thoughts: the pipeline can be read as feature re-representation (AggMap) followed by a DL network (AggMapNet). Note that the process is not end to end; it leans heavily on re-representing the features to exploit the expressive power of a new feature space. There is something back-to-basics about that, but because the data are high-dimensional, features are hard to hand-craft, so the feature stage relies on general unsupervised methods: UMAP, a manifold-learning method driven by pairwise correlation distances, embeds the omics feature points into 2D, and agglomerative hierarchical clustering groups them into feature clusters. Interestingly, these are all existing general-purpose unsupervised algorithms. Manifold-based clustering of this kind seems to reduce dimensionality while roughly preserving metric structure, extracting features that serve the downstream task, and for small samples it appears to work quite well. One idea: could we instead synthesize data generatively, learn the embedding, and then do the downstream task? I am tempted to try, although benchmarking against 18 datasets sounds a bit exhausting. (A rough sketch of the AggMap idea follows the abstract below.)
IF:16.600Q1 Nucleic acids research, 2022-05-06. DOI: 10.1093/nar/gkac010 PMID: 35100418
Abstract:
Omics-based biomedical learning frequently relies on data of high-dimensions (up to thousands) and low-sample sizes (dozens to hundreds), which challenges efficient deep learning (DL) algorithms, particularly for low-sample omics investigations. Here, an unsupervised novel feature aggregation tool AggMap was developed to Aggregate and Map omics features into multi-channel 2D spatial-correlated image-like feature maps (Fmaps) based on their intrinsic correlations. AggMap exhibits strong feature reconstruction capabilities on a randomized benchmark dataset, outperforming existing methods. With AggMap multi-channel Fmaps as inputs, newly-developed multi-channel DL AggMapNet models outperformed the state-of-the-art machine learning models on 18 low-sample omics benchmark tasks. AggMapNet exhibited better robustness in learning noisy data and disease classification. The AggMapNet explainable module Simply-explainer identified key metabolites and proteins for COVID-19 detections and severity predictions. The unsupervised AggMap algorithm of good feature restructuring abilities combined with supervised explainable AggMapNet architecture establish a pipeline for enhanced learning and interpretability of low-sample omics data.
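As a rough illustration of the AggMap idea described above, here is a minimal sketch under my own simplifications (correlation distance, UMAP layout, clustering on the 2D embedding for channel assignment, grid collisions simply overwrite); it is not the authors' implementation.

# Sketch: restructure a tabular omics matrix into 2D multi-channel "Fmaps".
import numpy as np
import umap                                   # umap-learn
from sklearn.cluster import AgglomerativeClustering

def build_fmaps(X, grid=20, n_channels=5):
    """X: samples x features matrix. Returns samples x channels x grid x grid."""
    corr = np.corrcoef(X.T)                   # feature-feature correlation
    dist = 1.0 - np.abs(corr)                 # pairwise correlation distance
    xy = umap.UMAP(n_components=2, metric="precomputed").fit_transform(dist)
    span = xy.max(0) - xy.min(0) + 1e-9
    ij = np.floor((xy - xy.min(0)) / span * (grid - 1)).astype(int)
    chan = AgglomerativeClustering(n_clusters=n_channels).fit_predict(xy)
    fmaps = np.zeros((X.shape[0], n_channels, grid, grid))
    for f in range(X.shape[1]):               # place each feature on the grid
        fmaps[:, chan[f], ij[f, 0], ij[f, 1]] = X[:, f]
    return fmaps

A CNN (the AggMapNet role) can then be trained on these image-like Fmaps exactly as it would be on multi-channel images.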
909.
颜林林 (2022-06-28 07:39):
#paper doi:10.1101/2022.06.22.497216 bioRxiv, 2022, Intratumoral mregDC and CXCL13 T helper niches enable local differentiation of CD8 T cells following PD-1 blockade. This work comes from the Icahn School of Medicine at Mount Sinai. The patient cohort is drawn from a multicenter phase II trial of neoadjuvant anti-PD-1 therapy (cemiplimab) given before surgery in non-small cell lung cancer (NSCLC), hepatocellular carcinoma (HCC) and head and neck squamous cell carcinoma (HNSCC) (NCT03916627; the trial is still running, started in 2019 and expected to complete in 2024). The paper focuses on the hepatocellular carcinoma patients and applies TCR sequencing, whole-exome sequencing, single-cell RNA sequencing and multiplex immunohistochemistry to tissue resected after neoadjuvant treatment, looking for cell populations associated with response. Immunohistochemistry and immunofluorescence confirmed that even among patients whose tumors are rich in infiltrating T cells, some do not respond to PD-1 blockade. Comparing cell-population composition between responders and non-responders pointed to a combination of cell types, mature regulatory dendritic cells (mregDC, LAMP3+) and CXCL13+ CD4+ T helper cells, which together with PD-1-high progenitor CD8+ T cells form cellular triads that drive the progenitors to differentiate into PD-1-high GZMK+ effector-like T cells; without these two supporting cell types, the progenitors instead become exhausted CD8+ T cells, which accounts for the different outcomes of the neoadjuvant therapy. The study adds new evidence toward the mechanisms underlying immunotherapy response.
Abstract:
Here, we leveraged a large neoadjuvant PD-1 blockade trial in patients with hepatocellular carcinoma (HCC) to search for correlates of response to immune checkpoint blockade (ICB) within T cell-rich tumors. We show that ICB response correlated with the clonal expansion of intratumoral CXCL13+ CH25H+ IL-21+ PD-1+ CD4 T helper cells (CXCL13+ Th) and Granzyme K+ PD-1+ effector-like CD8 T cells, whereas terminally exhausted CD39hi TOXhi PD-1hi CD8 T cells dominated in non-responders. Strikingly, most T cell receptor (TCR) clones that expanded post-treatment were found in pre-treatment biopsies. Notably, PD-1+ TCF-1+ progenitor-like CD8 T cells were present in tumors of responders and non-responders and shared clones mainly with effector-like cells in responders or terminally differentiated cells in non-responders, suggesting that local CD8 T cell differentiation occurs upon ICB. We found that these progenitor CD8 T cells interact with CXCL13+ Th cells within cellular triads around dendritic cells enriched in maturation and regulatory molecules, or "mregDC". Receptor-ligand analysis revealed unique interactions within these triads that may promote the differentiation of progenitor CD8 T cells into effector-like cells upon ICB. These results suggest that discrete intratumoral niches that include mregDC and CXCL13+ Th cells control the differentiation of tumor-specific progenitor CD8 T cell clones in patients treated with ICB.
910.
李翛然 (2022-06-27 18:03):
#paper doi:https://doi.org/10.1038/d41573-022-00052-y Hooking FSH as a potential target for Alzheimer disease. A fairly recent, credible article on a target for Alzheimer disease in women; we have also been building a mouse model of this disease recently. The FSH receptor pathway here was reproduced by gene knockout; our own plan is to engineer a specific mutation and then look for a suitable inhibitor. Heh, does that count as disclosing core technology?
911.
大象城南 (2022-06-27 10:28):
#paper doi: 10.1002/nbm.1579 NMR in Biomedicine, 2010, Mapping brain anatomical connectivity using white matter tractography. Integration of neural processing in the human brain is realized through the interconnections between different neural centers, which run along white matter pathways. White matter tractography is currently the only technique that can reconstruct the brain's anatomical connectivity noninvasively and in vivo. The trajectory and termination of white matter pathways are estimated from the local orientations of nerve bundles, which are in turn obtained from measurements of water diffusion in the brain. This review covers techniques for estimating fiber orientations from diffusion measurements, describes tractography methods and their current limitations, including sensitivity to image noise and partial volume effects, and discusses applications such as the topographic characterization of white matter connections, the segmentation of specific white matter pathways, and the corresponding gray matter functional units. In that context it describes the potential impact of tractography on mapping the brain's functional systems and subsystems and their interrelations, and it closes with applications to brain disorders, including localizing fiber tracts in tumor-affected brains and identifying impaired connectivity in neurological and neuropsychiatric diseases.
IF:2.700Q1 NMR in biomedicine, 2010-Aug. DOI: 10.1002/nbm.1579 PMID: 20886567
Abstract:
Integration of the neural processes in the human brain is realized through interconnections that exist between different neural centers. These interconnections take place through white matter pathways. White matter tractography is currently the only available technique for the reconstruction of the anatomical connectivity in the human brain noninvasively and in vivo. The trajectory and terminations of white matter pathways are estimated from local orientations of nerve bundles. These orientations are obtained using measurements of water diffusion in the brain. In this article, the techniques for estimating fiber directions from diffusion measurements in the human brain are reviewed. Methods of white matter tractography are described, together with the current limitations of the technique, including sensitivity to image noise and partial voluming. The applications of white matter tractography to the topographical characterization of the white matter connections and the segmentation of specific white matter pathways, and corresponding functional units of gray matter, are discussed. In this context, the potential impact of white matter tractography in mapping the functional systems and subsystems in the human brain, and their interrelations, is described. Finally, the applications of white matter tractography to the study of brain disorders, including fiber tract localization in brains affected by tumors and the identification of impaired connectivity routes in neurologic and neuropsychiatric diseases, are discussed.
912.
lsj (2022-06-27 10:16):
#paper Virtual resection predicts surgical outcome for drug-resistant epilepsy. Using linear systems stability theory and analysis, the study proposes a new intracranial EEG marker of the seizure onset zone, neural fragility, and validates the approach retrospectively on data from 91 patients across multiple centers. (A toy virtual-resection sketch follows the abstract below.)
IF:10.600Q1 Brain : a journal of neurology, 2019-12-01. DOI: 10.1093/brain/awz303 PMID: 31599323 PMCID:PMC6885672
Abstract:
Patients with drug-resistant epilepsy often require surgery to become seizure-free. While laser ablation and implantable stimulation devices have lowered the morbidity of these procedures, seizure-free rates have not dramatically improved, particularly for patients without focal lesions. This is in part because it is often unclear where to intervene in these cases. To address this clinical need, several research groups have published methods to map epileptic networks but applying them to improve patient care remains a challenge. In this study we advance clinical translation of these methods by: (i) presenting and sharing a robust pipeline to rigorously quantify the boundaries of the resection zone and determining which intracranial EEG electrodes lie within it; (ii) validating a brain network model on a retrospective cohort of 28 patients with drug-resistant epilepsy implanted with intracranial electrodes prior to surgical resection; and (iii) sharing all neuroimaging, annotated electrophysiology, and clinical metadata to facilitate future collaboration. Our network methods accurately forecast whether patients are likely to benefit from surgical intervention based on synchronizability of intracranial EEG (area under the receiver operating characteristic curve of 0.89) and provide novel information that traditional electrographic features do not. We further report that removing synchronizing brain regions is associated with improved clinical outcome, and postulate that sparing desynchronizing regions may further be beneficial. Our findings suggest that data-driven network-based methods can identify patients likely to benefit from resective or ablative therapy, and perhaps prevent invasive interventions in those unlikely to do so.
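To make the virtual-resection idea in the abstract concrete, here is a toy sketch (my own illustration, not the paper's pipeline): build a functional network from iEEG channel correlations, compute a Laplacian-based synchronizability score, and recompute it after removing the channels in a candidate resection zone. The eigenvalue-ratio definition used here is one common convention; the paper's exact formulation may differ.

# Toy "virtual resection" on an iEEG functional network.
import numpy as np

def synchronizability(A):
    """A: symmetric functional-connectivity (adjacency) matrix."""
    L = np.diag(A.sum(axis=1)) - A                    # graph Laplacian
    eig = np.sort(np.linalg.eigvalsh(L))
    return eig[1] / eig[-1]                           # lambda_2 / lambda_max

def virtual_resection(A, resected):
    keep = np.setdiff1d(np.arange(A.shape[0]), resected)
    return synchronizability(A[np.ix_(keep, keep)])

ieeg = np.random.randn(10, 1000)                      # stand-in 10-channel window
A = np.abs(np.corrcoef(ieeg)); np.fill_diagonal(A, 0)
print(synchronizability(A), virtual_resection(A, resected=[0, 1, 2]))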
913.
尹志 (2022-06-27 08:22):
#paper doi:10.1016/j.tics.2021.11.008 Trends in Cognitive Sciences, Vol 26, Issue 2, 2022, Next-generation deep learning based on simulators and synthetic data. Today's mainstream deep learning applications rely chiefly on supervised learning, which needs large amounts of labeled data; given how costly and slow such labels are to obtain, this has become a bottleneck for the field. One possible way out is to make full use of synthetic data, and this paper reviews that topic. The authors group the sources of synthetic data into three types: data produced by rendering, that is, generated inside modeling and rendering pipelines; data produced by generative models; and data produced by fusion models. More concretely, the first type comes from simulation and modeling workflows with clear physical underpinnings; the second from statistically grounded generative models that estimate the data distribution; and the third from blending data across domains, for example compositing foreground and background domains in various ways. Because gaps remain between synthetic and real data, techniques such as domain adaptation continue to develop so that synthetic data can be used more effectively. Moreover, these data-synthesis schemes borrow heavily from how humans learn naturally, which has created a two-way trend: data synthesis keeps drawing on the characteristics of natural learning, while research on data synthesis in turn advances our understanding of various properties of biological systems. The paper ends by summarizing the characteristics and challenges of using synthetic data for scientific exploration, physics and multimodal learning; this part is very condensed, and readers interested in those themes can branch out through the references, which are valuable research leads. (A toy example of the fusion route follows the abstract below.)
Abstract:
Deep learning (DL) is being successfully applied across multiple domains, yet these models learn in a most artificial way: they require large quantities of labeled data to grasp even simple concepts. Thus, the main bottleneck is often access to supervised data. Here, we highlight a trend in a potential solution to this challenge: synthetic data. Synthetic data are becoming accessible due to progress in rendering pipelines, generative adversarial models, and fusion models. Moreover, advancements in domain adaptation techniques help close the statistical gap between synthetic and real data. Paradoxically, this artificial solution is also likely to enable more natural learning, as seen in biological systems, including continual, multimodal, and embodied learning. Complementary to this, simulators and deep neural networks (DNNs) will also have a critical role in providing insight into the cognitive and neural functioning of biological systems. We also review the strengths of, and opportunities and novel challenges associated with, synthetic data.
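As a toy illustration of the "fusion model" route mentioned above (my own minimal example, not taken from the paper): paste a labeled foreground crop onto a background image and emit the corresponding pixel mask, so the annotation comes for free with the synthetic image.

# Toy cut-and-paste synthetic-data generator: composite a foreground onto a
# background and return the image plus a pixel mask as the free label.
# Purely illustrative; real pipelines also handle blending, scale and lighting.
import numpy as np

def composite(background, foreground, fg_mask, top, left):
    img = background.copy()
    mask = np.zeros(background.shape[:2], dtype=np.uint8)
    h, w = fg_mask.shape
    region = img[top:top + h, left:left + w]          # view into the copy
    region[fg_mask > 0] = foreground[fg_mask > 0]
    mask[top:top + h, left:left + w] = fg_mask
    return img, mask

bg = np.zeros((128, 128, 3), dtype=np.uint8)          # stand-in background
fg = np.full((32, 32, 3), 255, dtype=np.uint8)        # stand-in foreground object
image, label_mask = composite(bg, fg, np.ones((32, 32), np.uint8), top=40, left=60)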
914.
颜林林 (2022-06-27 00:24):
#paper doi:10.3390/diagnostics12061493 Diagnostics, 2022, MixPatch: A New Method for Training Histopathology Image Classifiers. In pathology image analysis, the raw whole-slide images are far too large (commonly around 50,000 x 50,000 pixels) to feed directly into a DNN, so they are usually tiled into many patches that are analyzed (trained on or predicted) one by one. Each patch is typically annotated by a pathologist with a clinical label (for example, malignant region or not), usually a binary yes/no state. In practice, however, many patches contain a mix of benign and malignant regions, and such "uncertain" patches cause misclassification and performance loss. This study starts from minimal patches (128 x 128 pixels, regarded by pathologists as the smallest recognizable region) to obtain a "clean" gold-standard dataset, then merges adjacent minimal patches (usually 9 or 16, i.e. 3x3 or 4x4) into "mixed patches", each given a confidence estimate derived from the labels of its constituents; this is essentially a fuzzy-set idea. The procedure improves pathology classification performance and also gives better results for slide-level prediction. (A sketch of the proportion-based soft labeling follows the abstract below.)
Abstract:
CNN-based image processing has been actively applied to histopathological analysis to detect and classify cancerous tumors automatically. However, CNN-based classifiers generally predict a label with overconfidence, which becomes a serious problem in the medical domain. The objective of this study is to propose a new training method, called MixPatch, designed to improve a CNN-based classifier by specifically addressing the prediction uncertainty problem and examine its effectiveness in improving diagnosis performance in the context of histopathological image analysis. MixPatch generates and uses a new sub-training dataset, which consists of mixed-patches and their predefined ground-truth labels, for every single mini-batch. Mixed-patches are generated using a small size of clean patches confirmed by pathologists while their ground-truth labels are defined using a proportion-based soft labeling method. Our results obtained using a large histopathological image dataset shows that the proposed method performs better and alleviates overconfidence more effectively than any other method examined in the study. More specifically, our model showed 97.06% accuracy, an increase of 1.6% to 12.18%, while achieving 0.76% of expected calibration error, a decrease of 0.6% to 6.3%, over the other models. By specifically considering the mixed-region variation characteristics of histopathology images, MixPatch augments the extant mixed image methods for medical image analysis in which prediction uncertainty is a crucial issue. The proposed method provides a new way to systematically alleviate the overconfidence problem of CNN-based classifiers and improve their prediction accuracy, contributing toward more calibrated and reliable histopathology image analysis.
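A minimal sketch of the proportion-based soft labeling described above (my own reading of the idea, not the authors' code): tile a 3x3 grid of clean 128 x 128 patches into one mixed patch and set its soft label to the class proportions of the constituent patches; training then uses cross-entropy against this soft target.

# Build a 3x3 "mixed patch" from clean 128x128 patches plus its soft label.
# Illustrative only; assumes binary classes {0: benign, 1: malignant}.
import numpy as np

def mix_patch(patches, labels, grid=3, n_classes=2):
    """patches: list of grid*grid HxWxC arrays; labels: list of ints."""
    rows = [np.concatenate(patches[r * grid:(r + 1) * grid], axis=1)
            for r in range(grid)]
    mixed = np.concatenate(rows, axis=0)              # 384 x 384 x C image
    soft = np.bincount(labels, minlength=n_classes) / len(labels)
    return mixed, soft

patches = [np.zeros((128, 128, 3), dtype=np.uint8) for _ in range(9)]
labels = [0, 0, 0, 0, 0, 0, 1, 1, 1]
mixed, soft_label = mix_patch(patches, labels)        # soft_label = [2/3, 1/3]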
915.
颜林林 (2022-06-26 22:13):
#paper doi:10.1371/journal.pcbi.1009730 PLOS Computational Biology, 2022, Improved transcriptome assembly using a hybrid of long and short reads with StringTie. From Johns Hopkins, this paper presents a tool that can assemble transcriptomes from a mixture of long-read and short-read sequencing data. Among high-throughput platforms, short-read data are highly accurate but the reads are too short to cover full transcripts, while long reads can span multiple exons and help determine transcript splicing, yet their lower base accuracy causes alignment errors that confound transcript identification. The paper shows how error-induced "noisy" alignments greatly inflate the search space, and addresses this by solving a maximum-flow problem from graph theory and by using the more accurate short reads locally around noisy alignments to pin down the correct splice sites, thereby combining the strengths of both platforms while running no slower than earlier single-data-type tools. Besides simulated data, the evaluation uses several real datasets from Arabidopsis thaliana, mouse and human, and shows the expected gains in assembly precision and in the number of correctly annotated transcripts produced. (A toy max-flow example follows the abstract below.)
Abstract:
Short-read RNA sequencing and long-read RNA sequencing each have their strengths and weaknesses for transcriptome assembly. While short reads are highly accurate, they are rarely able to span multiple exons. Long-read technology can capture full-length transcripts, but its relatively high error rate often leads to mis-identified splice sites. Here we present a new release of StringTie that performs hybrid-read assembly. By taking advantage of the strengths of both long and short reads, hybrid-read assembly with StringTie is more accurate than long-read only or short-read only assembly, and on some datasets it can more than double the number of correctly assembled transcripts, while obtaining substantially higher precision than the long-read data assembly alone. Here we demonstrate the improved accuracy on simulated data and real data from Arabidopsis thaliana, Mus musculus, and human. We also show that hybrid-read assembly is more accurate than correcting long reads prior to assembly while also being substantially faster. StringTie is freely available as open source software at https://github.com/gpertea/stringtie.
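To make the maximum-flow idea concrete, here is a toy flow computation on a made-up splice graph (the exon nodes and read-coverage capacities are invented for illustration; this is not StringTie's actual graph construction).

# Toy splice graph: nodes are exons, edge capacities are junction read coverages.
# Maximum flow from source to sink bounds how much expression the path set can
# carry; StringTie's real formulation is considerably richer than this.
import networkx as nx

G = nx.DiGraph()
G.add_edge("source", "exon1", capacity=30)
G.add_edge("exon1", "exon2", capacity=18)   # junction supported by 18 reads
G.add_edge("exon1", "exon3", capacity=12)   # alternative junction, 12 reads
G.add_edge("exon2", "exon4", capacity=18)
G.add_edge("exon3", "exon4", capacity=12)
G.add_edge("exon4", "sink", capacity=30)

flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
print(flow_value)             # 30
print(flow_dict["exon1"])     # {'exon2': 18, 'exon3': 12}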
916.
颜林林 (2022-06-25 20:26):
#paper doi:10.3390/s22124409 Sensors, 2022, Deep Neural Networks Applied to Stock Market Sentiment Analysis. This deep learning application paper from Portugal was discovered and pushed via PubMed (PMID: 35746192). It mainly describes how to use deep neural networks to infer the sentiment (positive or negative) of text from social sites such as Twitter and Reddit, and then uses that sentiment signal for simulated investing to evaluate the return on investment. The content is not especially novel; its careful walkthrough of DL principles, implementation and evaluation reads more like a tutorial, while the stock market and investment material feels somewhat disconnected, almost tacked on, since the deep model's performance evaluation is still done only on sentiment classification. In the outlook the authors mention plans to add data streaming technology so the analysis can run in real time, which may point to more suitable new application scenarios. (A generic sketch of this family of models follows the abstract below.)
Abstract:
The volume of data is growing exponentially and becoming more valuable to organizations that collect it, from e-commerce data, shipping, audio and video logs, text messages, internet search queries, stock market activity, financial transactions, the Internet of Things, and various other sources. The major challenges are related with the way to extract insights from such a rich data environment and whether Deep Learning can be successful with Big Data. To get some insight on these topics, social network data are employed as a case study on how sentiments can affect decisions in stock market environments. In this paper, we propose a generalized Deep Learning-based classification framework for Stock Market Sentiment Analysis. This work comprises the study, the development, and implementation of an automatic classification system based on Deep Learning and the validation of its adequacy and efficiency in any scenario, particularly Stock Market Sentiment Analysis. Distinct datasets and several Deep Learning approaches with different layers and embedded techniques are used, and their performances are evaluated. These developments show how Deep Learning reacts to distinct contexts. The results also give context on how different techniques with different parameter combinations react to certain types of data. Convolution obtained the best results when dealing with complex data inputs, and long short-term layers kept a memory of data, allowing inputs which are not as common to still be considered for decisions. The models that resulted from Stock Market Sentiment Analysis datasets were applied with some success to real-life problems. The best models reached accuracies of 73% in training and 69% in certain test datasets. In a simulation, a model was able to provide a Return on Investment of 4.4%. The results contribute to understanding how to process Big Data efficiently using Deep Learning and specialized hardware techniques.
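As a generic sketch of the embedding + convolution + LSTM model family the paper evaluates (all sizes and the tokenization are my own placeholders, not the authors' settings):

# Generic sentiment classifier: token ids -> embedding -> 1D conv -> LSTM -> prob.
import torch
import torch.nn as nn

class SentimentNet(nn.Module):
    def __init__(self, vocab_size=20_000, emb=128, conv=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.conv = nn.Conv1d(emb, conv, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(conv, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, token_ids):                     # (batch, seq_len) int64
        x = self.emb(token_ids).transpose(1, 2)       # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq_len, conv)
        _, (h, _) = self.lstm(x)                      # h: (1, batch, hidden)
        return torch.sigmoid(self.out(h[-1]))         # positive-sentiment prob

model = SentimentNet()
probs = model(torch.randint(0, 20_000, (4, 60)))      # 4 dummy posts of 60 tokens

A trading simulation like the one in the paper would then aggregate such per-post probabilities into a daily sentiment signal and trade on it; that step is independent of the classifier itself.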
917.
张浩彬 (2022-06-25 15:38):
#paper doi:10.1007/s11356-021-17442-1, A systematic literature review of deep learning neural network for time series air quality forecasting. A 2021 review of deep learning for forecasting air pollutants. It gives a fairly comprehensive summary of air quality forecasting methods from the deep learning angle, covering four areas: single models, hybrid models, spatiotemporal networks, and deep learning combined with series decomposition, and it discusses the relevant papers in each area in reasonable detail. The main shortcoming is that the author devotes little space to comparing these four families against one another. (A minimal single-model forecaster sketch follows the abstract below.)
Abstract:
Rapid progress of industrial development, urbanization and traffic has caused air quality reduction that negatively affects human health and environmental sustainability, especially among developed countries. Numerous studies on the development of air quality forecasting model using machine learning have been conducted to control air pollution. As such, there are significant numbers of reviews on the application of machine learning in air quality forecasting. Shallow architectures of machine learning exhibit several limitations and yield lower forecasting accuracy than deep learning architecture. Deep learning is a new technology in computational intelligence; thus, its application in air quality forecasting is still limited. This study aims to investigate the deep learning applications in time series air quality forecasting. Owing to this, literature search is conducted thoroughly from all scientific databases to avoid unnecessary clutter. This study summarizes and discusses different types of deep learning algorithms applied in air quality forecasting, including the theoretical backgrounds, hyperparameters, applications and limitations. Hybrid deep learning with data decomposition, optimization algorithm and spatiotemporal models are also presented to highlight those techniques' effectiveness in tackling the drawbacks of individual deep learning models. It is clearly stated that hybrid deep learning was able to forecast future air quality with higher accuracy than individual models. At the end of the study, some possible research directions are suggested for future model development. The main objective of this review study is to provide a comprehensive literature summary of deep learning applications in time series air quality forecasting that may benefit interested researchers for subsequent research.
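For reference, a minimal example of the "single model" family the review covers (window length, network size and the random stand-in series are my own placeholders):

# Univariate forecaster: sliding windows of the past 24 hours -> LSTM -> next hour.
import numpy as np
import torch
import torch.nn as nn

def make_windows(series, lookback=24):
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32))

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, lookback, 1)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)

pm25 = np.random.rand(500)                     # stand-in for an hourly PM2.5 series
X, y = make_windows(pm25)
loss = nn.MSELoss()(LSTMForecaster()(X), y)    # one training step would backprop this

The hybrid variants the review emphasizes would first decompose the series (for example into trend and residual components) and fit one such model per component.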
918.
白义民 (2022-06-25 14:36):
#paper Deng Xiaomang, "The Metaphysical Principles of Language" (《语言的形上学原理》). Language is a tool of daily use, and reflecting on the philosophy of language helps us wield that tool well. By pointing out two negative properties of language, self-negation and self-deception, the essay uses the perspective of language games to explain the cognitive limits of the world of conceptual phenomena and the ineffability of absolute truth (the ultimate truth, paramartha-satya); it then turns to the poetic character of language, where words fall short of meaning and the meaning lies beyond the words, to show how those cognitive limits can be transcended and how the meaning of the imagistic world of absolute truth can be apprehended.
Abstract:
In the metaphysics of language, three basic principles deserve the most attention. First, the self-negating essence of language: language is dialectical in nature, its "is" already contains "is not", and every "true statement" harbors a "lie", otherwise it would not be language. Second, the self-deceiving function of language: conscious self-deception, the game of make-believe, is the soul and life of language; it is grounded in the self-deceptive structure of human self-consciousness and at the same time gives that structure its concrete confirmation. Third, the rhetorical or poetic attribute of language: all language arises from the poetic, which is also the origin of the grammatical and logical functions within language. The metaphysics "after linguistics" that I envisage is concerned precisely with the relation between the poetic and the logical functions of language: the two are not merely a "unity of opposites" but stand in a dialectical progression of "self-negation", and this constitutes the most basic principle of the metaphysics "after linguistics".
919.
颜林林 (2022-06-24 21:32):
#paper doi:10.1038/s41587-022-01294-2 Nature Biotechnology, 2022, The clinical progress of mRNA vaccines and immunotherapies. A long review of mRNA vaccines. The idea of using mRNA as a vaccine vector dates back to 1990: instead of injecting the (inactivated or attenuated) pathogen or the target protein itself, it borrows the recipient's own translation machinery to produce the target protein. That brings a string of advantages, such as simple design, intrinsic immunogenicity and rapid, scalable manufacturing, along with drawbacks and challenges such as poor stability and the difficulty of delivering the vaccine to its target site in the body. In the three years since the COVID-19 outbreak, large increases in funding and emergency use authorizations have greatly accelerated the development and deployment of mRNA vaccines. This review covers those developments in some detail, including delivery methods; the development, use and optimization of vaccines against infectious diseases; vaccine approaches to cancer therapy; and the use of mRNA in protein and cell-based immunotherapies, and on that basis it discusses open problems and future research directions. Reading it end to end gives a fairly deep understanding of mRNA vaccines and their technology stack, and a real sense that this is an important platform with great potential that deserves continued exploration and development.
IF:33.100Q1 Nature biotechnology, 2022-06. DOI: 10.1038/s41587-022-01294-2 PMID: 35534554
Abstract:
The emergency use authorizations (EUAs) of two mRNA-based severe acute respiratory syndrome coronavirus (SARS-CoV)-2 vaccines approximately 11 months after publication of the viral sequence highlights the transformative potential of this nucleic acid technology. Most clinical applications of mRNA to date have focused on vaccines for infectious disease and cancer for which low doses, low protein expression and local delivery can be effective because of the inherent immunostimulatory properties of some mRNA species and formulations. In addition, work on mRNA-encoded protein or cellular immunotherapies has also begun, for which minimal immune stimulation, high protein expression in target cells and tissues, and the need for repeated administration have led to additional manufacturing and formulation challenges for clinical translation. Building on this momentum, the past year has seen clinical progress with second-generation coronavirus disease 2019 (COVID-19) vaccines, Omicron-specific boosters and vaccines against seasonal influenza, Epstein-Barr virus, human immunodeficiency virus (HIV) and cancer. Here we review the clinical progress of mRNA therapy as well as provide an overview and future outlook of the transformative technology behind these mRNA-based drugs.
920.
张德祥 (2022-06-23 09:27):
#paper https://doi.org/10.1016/j.biosystems.2022.104714 Neurons as hierarchies of quantum reference frames. Conceptual and mathematical models of neurons have lagged behind the empirical data for decades; the neuron concepts that inspired today's neural networks are models from decades ago, and AI badly needs inspiration from efficient biological neuron models. This paper extends current neuron models with tools from quantum information theory; in this representation, hierarchies of quantum reference frames play the role of hierarchical active inference models. Whether biological computation has anything to do with quantum effects remains controversial, and the paper also lists some of the relevant evidence. I look forward to the emergence of efficient, biologically inspired neuron models.
Abstract:
Conceptual and mathematical models of neurons have lagged behind empirical understanding for decades. Here we extend previous work in modeling biological systems with fully scale-independent quantum information-theoretic tools to develop a uniform, scalable representation of synapses, dendritic and axonal processes, neurons, and local networks of neurons. In this representation, hierarchies of quantum reference frames act as hierarchical active-inference systems. The resulting model enables specific predictions of correlations between synaptic activity, dendritic remodeling, and trophic reward. We summarize how the model may be generalized to nonneural cells and tissues in developmental and regenerative contexts.