Papers shared by user 尹志.
37 paper-sharing posts found in total; this page shows posts 21 - 37.
21.
尹志
(2023-03-31 00:12):
#paper https://doi.org/10.1038/s41586-023-05870-7. Nature, 2023, Programmable protein delivery with a bacterial contractile injection system. This is a new paper from Feng Zhang's group this year. It describes a nanomachine called the extracellular contractile injection system (eCIS), which can be reprogrammed to target human cells and deliver a variety of protein payloads, including Cas9, base editors and toxins. Such systems could find use in gene therapy, cancer therapy, biocontrol and other areas. The paper also discusses contractile injection systems (CISs) as potential tools for protein delivery and gene editing, and their applications in biotechnology and medicine. It is almost entirely experimental; the methods section is eye-opening, a dazzling array I mostly cannot follow, but the conclusions read as genuinely forward-looking work, and the use of AlphaFold (AF) for structure-guided engineering particularly caught my interest. In short, a first, admiring skim for now.
Abstract:
Endosymbiotic bacteria have evolved intricate delivery systems that enable these organisms to interface with host biology. One example, the extracellular contractile injection systems (eCISs), are syringe-like macromolecular complexes that inject protein payloads into eukaryotic cells by driving a spike through the cellular membrane. Recently, eCISs have been found to target mouse cells, raising the possibility that these systems could be harnessed for therapeutic protein delivery. However, whether eCISs can function in human cells remains unknown, and the mechanism by which these systems recognize target cells is poorly understood. Here we show that target selection by the Photorhabdus virulence cassette (PVC)-an eCIS from the entomopathogenic bacterium Photorhabdus asymbiotica-is mediated by specific recognition of a target receptor by a distal binding element of the PVC tail fibre. Furthermore, using in silico structure-guided engineering of the tail fibre, we show that PVCs can be reprogrammed to target organisms not natively targeted by these systems-including human cells and mice-with efficiencies approaching 100%. Finally, we show that PVCs can load diverse protein payloads, including Cas9, base editors and toxins, and can functionally deliver them into human cells. Our results demonstrate that PVCs are programmable protein delivery devices with possible applications in gene therapy, cancer therapy and biocontrol.
22.
尹志
(2023-02-28 21:51):
#paper https://doi.org/10.48550/arXiv.2203.17003 ICML, 2022, Equivariant Diffusion for Molecule Generation in 3D. Diffusion models are advancing extremely fast across many fields; beyond graphics and images, their reach now extends to biopharma and materials science. This paper applies a diffusion model to 3D molecule generation. The authors propose an equivariant diffusion model whose equivariant network jointly handles continuous variables such as atom coordinates and categorical variables such as atom types. The work reaches state-of-the-art performance on two standard benchmarks, QM9 and GEOM, and is one of the pioneering works bringing equivariance into diffusion models.
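A note to myself on what equivariance buys here: below is a tiny numpy sketch (my own toy coordinate update in the spirit of an EGNN layer, not the paper's network) showing that weighting relative position vectors by rotation-invariant quantities yields an update that commutes with any rotation and translation of the input.

import numpy as np

rng = np.random.default_rng(0)

def egnn_coord_update(x, h, w):
    # One EGNN-flavored coordinate update: x_i <- x_i + (1/(N-1)) * sum_j (x_i - x_j) * phi_ij,
    # where phi_ij depends only on E(3)-invariants (squared distance, node features).
    # phi is a tiny hand-rolled "network" with weights w, just for this check.
    diff = x[:, None, :] - x[None, :, :]                      # (N, N, 3) relative vectors
    d2 = (diff ** 2).sum(-1, keepdims=True)                   # (N, N, 1) invariant
    hi = np.broadcast_to(h[:, None, :], d2.shape[:2] + (h.shape[1],))
    hj = np.broadcast_to(h[None, :, :], d2.shape[:2] + (h.shape[1],))
    phi = np.tanh(np.concatenate([d2, hi, hj], axis=-1) @ w)  # (N, N, 1) invariant weight
    return x + (diff * phi).sum(axis=1) / (x.shape[0] - 1)

N, F = 5, 4
x, h = rng.normal(size=(N, 3)), rng.normal(size=(N, F))
w = rng.normal(size=(1 + 2 * F, 1))

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))                  # random rotation R, translation t
R = Q * np.sign(np.linalg.det(Q))
t = rng.normal(size=3)

out = egnn_coord_update(x, h, w)
out_transformed = egnn_coord_update(x @ R.T + t, h, w)
print(np.allclose(out_transformed, out @ R.T + t))            # True: the update is E(3)-equivariant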
arXiv,
2022.
DOI: 10.48550/arXiv.2203.17003
Abstract:
This work introduces a diffusion model for molecule generation in 3D that is equivariant to Euclidean transformations. Our E(3) Equivariant Diffusion Model (EDM) learns to denoise a diffusion process with an equivariant network that jointly operates on both continuous (atom coordinates) and categorical features (atom types). In addition, we provide a probabilistic analysis which admits likelihood computation of molecules using our model. Experimentally, the proposed method significantly outperforms previous 3D molecular generative methods regarding the quality of generated samples and efficiency at training time.
23.
尹志
(2023-01-31 20:59):
#paper Diffusion Models: A Comprehensive Survey of Methods and Applications, https://doi.org/10.48550/arXiv.2209.00796. This survey gives a detailed introduction to and organization of the currently very popular diffusion models. It groups existing diffusion models into three main families, DDPMs, SGMs and score SDEs, each progressively more general and able to handle a broader range of problems. Besides explaining and comparing the three mainstream families in detail and reviewing their improvements, the survey also examines how diffusion models relate to, and differ from, other mainstream generative models, and closes with a list of current applications across various fields. Given that diffusion models are inspired by physical concepts, I am very optimistic about further generalizations and applications that combine them with mathematical physics; for example, Prof. 顾险峰 (Xianfeng Gu) recently pointed out possible improvements based on optimal transport in an article, which is a genuinely interesting idea and topic.
arXiv,
2022.
DOI: 10.48550/arXiv.2209.00796
Abstract:
Diffusion models have emerged as a powerful new family of deep generative models with record-breaking performance in many applications, including image synthesis, video generation, and molecule design. In this survey, we provide an overview of the rapidly expanding body of work on diffusion models, categorizing the research into three key areas: efficient sampling, improved likelihood estimation, and handling data with special structures. We also discuss the potential for combining diffusion models with other generative models for enhanced results. We further review the wide-ranging applications of diffusion models in fields spanning from computer vision, natural language processing, temporal data modeling, to interdisciplinary applications in other scientific disciplines. This survey aims to provide a contextualized, in-depth look at the state of diffusion models, identifying the key areas of focus and pointing to potential areas for further exploration. Github: this https URL.
24.
尹志
(2022-12-31 14:48):
#paper doi: https://doi.org/10.48550/arXiv.2210.11250, Structure-based drug design with geometric deep learning.
This is a fairly recent, short review of drug design and deep learning. It focuses on how geometric deep learning contributes to several important subtasks of structure-based drug design. Since structure-based drug design mainly uses the three-dimensional geometric information of macromolecules (such as proteins and nucleic acids) to identify suitable ligands, geometric deep learning, as a technique that builds geometric symmetry into deep learning, is a very promising tool. The review covers four subtasks: (1) molecular property prediction (binding affinity, protein function, pose scoring); (2) binding site and interface prediction (small-molecule binding sites and protein-protein interfaces); (3) binding pose generation and molecular docking (ligand-protein and protein-protein docking); and (4) structure-based de novo design of small-molecule ligands. It starts from common molecular representations, discusses the symmetry issues inherent in structure-based drug design, and then devotes one section to the state of geometric deep learning research on each of the four subtasks. A very good guideline for AI-based structure-based drug design.
arXiv,
2022.
DOI: 10.48550/arXiv.2210.11250
Abstract:
Structure-based drug design uses three-dimensional geometric information of macromolecules, such as proteins or nucleic acids, to identify suitable ligands. Geometric deep learning, an emerging concept of neural-network-based machine learning, has been applied to macromolecular structures. This review provides an overview of the recent applications of geometric deep learning in bioorganic and medicinal chemistry, highlighting its potential for structure-based drug discovery and design. Emphasis is placed on molecular property prediction, ligand binding site and pose prediction, and structure-based de novo molecular design. The current challenges and opportunities are highlighted, and a forecast of the future of geometric deep learning for drug discovery is presented.
25.
尹志
(2022-11-28 21:20):
#paper https://doi.org/10.1093/bib/bbab344 Briefings in Bioinformatics, 22(6), 2021, 1-11: Molecular design in drug discovery: a comprehensive review of deep generative models. A review of molecular design for drug discovery based on deep generative models. The year looks recent, but it is already far from state of the art, ha; still, it works well as an introduction. The paper covers molecular design with deep generative models, an important topic in drug discovery. It reviews the two mainstream molecular representations, SMILES-based and graph-based (a toy illustration of the two follows below), and under each representation introduces molecular design with VAE, GAN, RNN and flow-based generative models. It also surveys the main publicly available datasets for de novo molecular design, and closes by discussing the current challenges from the perspectives of data, models and evaluation metrics. When writing this review, though, the authors probably never expected diffusion models to sweep the generative-modeling field this year.
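To make the two representations concrete, here is a tiny toy sketch of my own (not from the review), using ethanol with hydrogens left implicit:

import numpy as np

smiles = "CCO"                                  # 1) string-based: the SMILES for ethanol

atoms = ["C", "C", "O"]                         # 2) graph-based: heavy atoms as nodes
adjacency = np.array([[0, 1, 0],                #    bonds as an adjacency matrix
                      [1, 0, 1],                #    C1-C2 and C2-O
                      [0, 1, 0]])
node_features = np.array([[1, 0],               #    one-hot node features: [is_C, is_O]
                          [1, 0],
                          [0, 1]])

# SMILES-based models (RNN/VAE over characters) consume the string; graph-based
# models (graph VAE/GAN/flow) consume node features plus the adjacency structure.
print(smiles, atoms, int(adjacency.sum()) // 2, "bonds")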
Abstract:
Deep generative models have been an upsurge in the deep learning community since they were proposed. These models are designed for generating new synthetic data including images, videos and texts by fitting the data approximate distributions. In the last few years, deep generative models have shown superior performance in drug discovery especially de novo molecular design. In this study, deep generative models are reviewed to witness the recent advances of de novo molecular design for drug discovery. In addition, we divide those models into two categories based on molecular representations in silico. Then these two classical types of models are reported in detail and discussed about both pros and cons. We also indicate the current challenges in deep generative models for de novo molecular design. De novo molecular design automatically is promising but a long road to be explored.
26.
尹志
(2022-10-27 20:44):
#paper doi: https://doi.org/10.48550/arXiv.1708.02002, Focal Loss for Dense Object Detection (ICCV 2017). This is a classic object-detection paper. As we know, detectors have long come in two families, one-stage and two-stage: the former represented by YOLO and SSD, the latter mostly derived from R-CNN. In general, one-stage detectors are faster but less accurate than two-stage ones. Conceptually, a two-stage detector first generates a set of candidate boxes and then classifies them precisely in a second step, whereas a one-stage detector directly produces a large, dense set of boxes in one pass. So can we build a one-stage detector that is fast and matches two-stage accuracy? The paper argues that the accuracy gap is caused by class imbalance: in a two-stage pipeline the first stage already filters out most background samples, but the candidate boxes a one-stage detector emits at once are so dense that the foreground-background imbalance becomes severe and holds accuracy back. The authors therefore add a scaling factor to the ordinary cross-entropy loss that automatically down-weights easy examples during training, so the model can concentrate on hard examples; this is the famous focal loss. Building on it, they design a one-stage detector, RetinaNet, which in their experiments is state of the art in both speed and accuracy, reaching 39.1 AP on COCO (an excellent result at the time).
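For my own notes, a minimal PyTorch sketch of the focal loss; the alpha/gamma defaults follow the paper, while the toy data below (dense predictions dominated by easy negatives) is just my illustration.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    # logits and targets have the same shape; targets are in {0, 1}.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")  # -log(p_t)
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

logits = torch.randn(1000)
targets = (torch.rand(1000) < 0.01).float()                  # roughly 1% foreground
print(focal_loss(logits, targets))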
arXiv,
2018.
DOI: 10.48550/arXiv.1708.02002
Abstract:
The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL.
27.
尹志
(2022-09-30 11:06):
#paper doi:10.48550/arXiv.1907.10830 U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation, ICLR 2020. Another image-to-image translation paper, again making effective improvements to the network architecture. The authors achieve unsupervised image translation by proposing a new attention module and a new learnable normalization function. The attention module handles geometric deformation between domains very well, which gives the architecture strong results on translations involving large changes in shape and artistic style.
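Out of curiosity about the normalization function, here is my rough PyTorch sketch of the AdaLIN idea (the shapes, the rho initialization and the clamp reflect my reading, not a verbatim port of the official code): a learned per-channel ratio rho mixes instance-normalized and layer-normalized activations, with gamma and beta supplied from outside.

import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    # Adaptive Layer-Instance Normalization, sketched after the U-GAT-IT idea:
    # rho in [0, 1] mixes instance norm and layer norm; gamma and beta are
    # predicted elsewhere (e.g. by an MLP) and passed in at call time.
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.rho = nn.Parameter(torch.full((1, num_features, 1, 1), 0.9))

    def forward(self, x, gamma, beta):
        in_mean = x.mean(dim=(2, 3), keepdim=True)               # per sample, per channel
        in_var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)            # per sample, over C, H, W
        ln_var = x.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        rho = self.rho.clamp(0.0, 1.0)
        out = rho * x_in + (1 - rho) * x_ln
        return out * gamma.view(x.size(0), -1, 1, 1) + beta.view(x.size(0), -1, 1, 1)

x = torch.randn(2, 8, 16, 16)
gamma, beta = torch.ones(2, 8), torch.zeros(2, 8)
print(AdaLIN(8)(x, gamma, beta).shape)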
arXiv,
2019.
DOI: 10.48550/arXiv.1907.10830
Abstract:
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based method which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets. Experimental results show the superiority of the proposed method compared to the existing state-of-the-art models with a fixed network architecture and hyper-parameters. Our code and datasets are available at this https URL or this https URL.
28.
尹志
(2022-08-31 09:46):
#paper doi:10.1089/genbio.2022.0017 GEN Biotechnology, 2022, Deep Learning Concepts and Applications for Synthetic Biology. This is a 2022 review of deep learning and synthetic biology, or what I would rather call a meta-review. It briefly surveys applications of deep learning in synthetic biology, categorizes the kinds of synthetic-biology data usable within deep learning frameworks, and introduces the architectures commonly used today. The most worthwhile part is the overview of applications: the design and modeling of biological parts, synthesizing new parts with generative models, structure prediction, vision applications, and so on; very helpful as an outline. But the content is not very specific, which is why I call it a meta-review: in each section, after a primer on the basic concepts, the authors usually point to a few more suitable reviews for that area. So reading it with your own direction and questions in mind, and digging down step by step from there, should give a good reading experience.
Abstract:
Synthetic biology has a natural synergy with deep learning. It can be used to generate large data sets to train models, for example by using DNA synthesis, and deep learning models can be used to inform design, such as by generating novel parts or suggesting optimal experiments to conduct. Recently, research at the interface of engineering biology and deep learning has highlighted this potential through successes including the design of novel biological parts, protein structure prediction, automated analysis of microscopy data, optimal experimental design, and biomolecular implementations of artificial neural networks. In this review, we present an overview of synthetic biology-relevant classes of data and deep learning architectures. We also highlight emerging studies in synthetic biology that capitalize on deep learning to enable novel understanding and design, and discuss challenges and future opportunities in this space.
29.
尹志
(2022-07-30 22:41):
#paper https://doi.org/10.48550/arXiv.2205.01529 Masked Generative Distillation, ECCV 2022. This is a knowledge-distillation paper that performs distillation by generating features, in a way somewhat reminiscent of contrastive/self-supervised learning. As we know, knowledge distillation is a general technique that has been applied to all kinds of machine learning tasks; in vision, for instance, classification, segmentation and detection. Typically, a distillation algorithm improves the representational power of the student's features by making the student imitate the teacher's features. This paper proposes instead that the student need not imitate the teacher's features directly: the student's features are randomly masked, and the student must generate the teacher's full features from its own partial features. Through this, the student's features gain stronger representational power. The idea is fun; to give a (perhaps imperfect) analogy, instead of copying the teacher's every move, the teacher rarely shows up, so the student has to guess, under supervision, what the teacher's features look like; once the student can guess right every time, it knows the teacher well, and its own representations are strong. With this approach the authors run extensive experiments on image classification, object detection, semantic segmentation and instance segmentation, across different datasets and models, and find consistent gains (mostly around 2-3 points; see the paper for exact numbers).
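A sketch of how I understand the mechanism (the layer sizes, the 1x1 align conv, the mask ratio and the generation block are my assumptions, not the paper's exact configuration):

import torch
import torch.nn as nn
import torch.nn.functional as F

class MGDLoss(nn.Module):
    # Masked-Generative-Distillation-style loss: zero out random spatial positions
    # of the student feature, ask a small generation block to reconstruct the
    # teacher's full feature, and penalize the MSE.
    def __init__(self, student_channels, teacher_channels, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.align = nn.Conv2d(student_channels, teacher_channels, 1)
        self.generation = nn.Sequential(
            nn.Conv2d(teacher_channels, teacher_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(teacher_channels, teacher_channels, 3, padding=1),
        )

    def forward(self, feat_student, feat_teacher):
        x = self.align(feat_student)
        n, _, h, w = x.shape
        mask = (torch.rand(n, 1, h, w, device=x.device) > self.mask_ratio).float()  # 1 = keep
        generated = self.generation(x * mask)
        return F.mse_loss(generated, feat_teacher)

loss_fn = MGDLoss(student_channels=64, teacher_channels=256)
fs, ft = torch.randn(2, 64, 32, 32), torch.randn(2, 256, 32, 32)
print(loss_fn(fs, ft))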
arXiv,
2022.
DOI: 10.48550/arXiv.2205.01529
Abstract:
Knowledge distillation has been applied to various tasks successfully. The current distillation algorithm usually improves students' performance by imitating the output of the teacher. This paper shows that teachers can also improve students' representation power by guiding students' feature recovery. From this point of view, we propose Masked Generative Distillation (MGD), which is simple: we mask random pixels of the student's feature and force it to generate the teacher's full feature through a simple block. MGD is a truly general feature-based distillation method, which can be utilized on various tasks, including image classification, object detection, semantic segmentation and instance segmentation. We experiment on different models with extensive datasets and the results show that all the students achieve excellent improvements. Notably, we boost ResNet-18 from 69.90% to 71.69% ImageNet top-1 accuracy, RetinaNet with ResNet-50 backbone from 37.4 to 41.0 Boundingbox mAP, SOLO based on ResNet-50 from 33.1 to 36.2 Mask mAP and DeepLabV3 based on ResNet-18 from 73.20 to 76.02 mIoU. Our codes are available at this https URL.
30.
尹志
(2022-06-28 22:16):
#paper doi:10.1093/nar/gkac010 Nucleic Acids Research, Volume 50, Issue 8, 6 May 2022, AggMapNet: enhanced and explainable low-sample omics deep learning with feature-aggregated multi-channel networks. Omics-based biomedical learning typically relies on high-dimensional features and small sample sizes, which is a challenge for today's mainstream deep learning methods. The paper first proposes an unsupervised feature-aggregation technique, AggMap, which aggregates omics features and, based on their intrinsic correlations, maps them into multi-channel 2D spatially correlated feature maps (Fmaps). On benchmark data, AggMap shows much stronger feature-reconstruction ability than existing methods. Using AggMap's multi-channel Fmaps as input, the authors then build the multi-channel deep learning model AggMapNet, which beats state-of-the-art machine learning models on 18 low-sample omics benchmark tasks. AggMapNet is also robust on noisy data and disease classification, and on the interpretability side its explainer module, Simply-explainer, identifies key metabolites and proteins for COVID-19 detection and severity prediction.
Overall, the paper proposes a pipeline for modeling low-sample omics data: the feature-restructuring ability of the unsupervised AggMap algorithm plus the supervised, explainable AggMapNet deep learning model.
A few take-aways. The work handles small-sample omics data through a pipeline that can be read as feature re-representation (AggMap) plus a DL network (AggMapNet). Notably, the process is not end-to-end; it leans heavily on re-representing the features to mine the representational power of a new feature space. There is something back-to-basics about it, but since the data are high-dimensional and hand-crafting features is hard, the feature step uses several unsupervised methods: the manifold-learning method UMAP, driven by pairwise correlation distances, embeds the omics feature points into 2D, while agglomerative hierarchical clustering groups them into multiple feature clusters (a rough sketch follows below). Interestingly, these are all existing, general-purpose unsupervised algorithms. Manifold-based methods of this kind seem to reduce dimensionality while roughly preserving the metric, extracting useful features for downstream tasks, and for small samples they appear to work rather well. One idea, then: could we synthesize data generatively, learn this embedding, and then do the downstream task? I am tempted to try, though benchmarking against 18 datasets sounds exhausting.
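Below is how I currently picture the restructuring step, as a rough, dependency-light sketch (the paper uses UMAP and its own agglomeration; the MDS embedding, ward clustering on the embedded points, grid size and channel count here are my stand-ins, and grid collisions simply overwrite):

import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                 # toy omics matrix: 60 samples x 200 features

# 1) Pairwise correlation distance between *features* (not samples).
dist = 1.0 - np.abs(np.corrcoef(X.T))

# 2) Embed each feature into 2D from the precomputed distances
#    (the paper uses UMAP; MDS keeps this sketch dependency-light).
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)

# 3) Group the embedded features into clusters -> channel assignment.
channels = AgglomerativeClustering(n_clusters=5).fit_predict(coords)

# 4) Rasterize the embedding onto a grid: each sample becomes a multi-channel
#    "image" whose pixel values are that sample's feature values.
grid = 16
span = coords.max(0) - coords.min(0) + 1e-9
ij = np.floor((coords - coords.min(0)) / span * (grid - 1)).astype(int)
fmaps = np.zeros((X.shape[0], 5, grid, grid))
for f, ((i, j), ch) in enumerate(zip(ij, channels)):
    fmaps[:, ch, i, j] = X[:, f]
print(fmaps.shape)                             # (60, 5, 16, 16), ready for a small multi-channel CNN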
Abstract:
Omics-based biomedical learning frequently relies on data of high-dimensions (up to thousands) and low-sample sizes (dozens to hundreds), which challenges efficient deep learning (DL) algorithms, particularly for low-sample omics investigations. Here, an unsupervised novel feature aggregation tool AggMap was developed to Aggregate and Map omics features into multi-channel 2D spatial-correlated image-like feature maps (Fmaps) based on their intrinsic correlations. AggMap exhibits strong feature reconstruction capabilities on a randomized benchmark dataset, outperforming existing methods. With AggMap multi-channel Fmaps as inputs, newly-developed multi-channel DL AggMapNet models outperformed the state-of-the-art machine learning models on 18 low-sample omics benchmark tasks. AggMapNet exhibited better robustness in learning noisy data and disease classification. The AggMapNet explainable module Simply-explainer identified key metabolites and proteins for COVID-19 detections and severity predictions. The unsupervised AggMap algorithm of good feature restructuring abilities combined with supervised explainable AggMapNet architecture establish a pipeline for enhanced learning and interpretability of low-sample omics data.
31.
尹志
(2022-06-27 08:22):
#paper doi:10.1016/j.tics.2021.11.008 Trends in Cognitive Sciences, Vol 26, Issue 2, 2022, Next-generation deep learning based on simulators and synthetic data. Today's mainstream deep learning applications rely mostly on supervised learning, which needs large amounts of labeled data; given how hard it is to obtain such data (economically and in terms of efficiency), this has become a bottleneck for deep learning. One possible solution is to make full use of synthetic data, and this paper reviews that theme. It divides sources of synthetic data into three types: data produced by rendering, i.e. generated in the course of modeling and rendering pipelines; data produced by generative models; and data produced by fusion models. More concretely, the first type comes from simulation and modeling processes with solid physical grounding; the second comes from statistically grounded generative models that estimate the data distribution; the third comes from blending data across domains, for example compositing foreground and background domains. Of course, gaps remain between synthetic and real data, so techniques such as domain adaptation keep evolving to make synthetic data more usable. Beyond that, these data-synthesis schemes borrow heavily from how humans learn naturally, which creates a two-way trend: data synthesis keeps drawing on the characteristics of natural learning, while research on data synthesis in turn pushes forward our understanding of biological systems. Finally, the paper summarizes the characteristics of and challenges in using synthetic data for scientific discovery, physics research and multimodal learning; this part is very condensed, and readers interested in these topics can expand through the references, which are valuable leads.
Abstract:
Deep learning (DL) is being successfully applied across multiple domains, yet these models learn in a most artificial way: they require large quantities of labeled data to grasp even simple concepts. Thus, the main bottleneck is often access to supervised data. Here, we highlight a trend in a potential solution to this challenge: synthetic data. Synthetic data are becoming accessible due to progress in rendering pipelines, generative adversarial models, and fusion models. Moreover, advancements in domain adaptation techniques help close the statistical gap between synthetic and real data. Paradoxically, this artificial solution is also likely to enable more natural learning, as seen in biological systems, including continual, multimodal, and embodied learning. Complementary to this, simulators and deep neural networks (DNNs) will also have a critical role in providing insight into the cognitive and neural functioning of biological systems. We also review the strengths of, and opportunities and novel challenges associated with, synthetic data.
32.
尹志
(2022-05-30 13:31):
#paper https://doi.org/10.48550/arXiv.1907.05600 Generative Modeling by Estimating Gradients of the Data Distribution, NeurIPS 2019 (Oral). Continuing with generative models: this paper proposes a score-based generative model. Mainstream generative models today roughly split into likelihood-based models and implicit models such as GANs that train adversarially without computing an explicit density. The former include VAEs, normalizing flows and so on, and the model in this paper also belongs to that camp. In this family, the normalizing constant Z is hard to compute because conditional probabilities must be integrated out, which has spawned many workarounds. The core idea here is to model and estimate the gradient of the probability density (more precisely, of the log-density); this gradient of the log-density is defined as the score function, and the authors estimate it via score matching. Once the model is trained, samples are generated by Langevin dynamics. I am still working through some details and reproducing the code; it feels like an effective class of generative models with high sample quality, and the improved versions can already rival GANs. The biggest problem for now is that it burns GPUs, really burns GPUs; I hope to do some work later on improving its training and sampling efficiency.
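A toy-scale sketch of the two ingredients as I understand them: denoising score matching at several noise levels, followed by annealed Langevin sampling. The 2D data, the tiny network, the noise schedule and the step sizes are all my simplifications rather than the paper's setup.

import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    # Toy score network s_theta(x, sigma) for 2D data; the noise level is fed in
    # by simple concatenation (a simplification of the paper's conditioning).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 128), nn.SiLU(),
                                 nn.Linear(128, 128), nn.SiLU(),
                                 nn.Linear(128, 2))
    def forward(self, x, sigma):
        return self.net(torch.cat([x, sigma], dim=1))

sigmas = torch.tensor([1.0, 0.5, 0.25, 0.1, 0.05])            # decreasing noise levels

def dsm_loss(model, x):
    # Denoising score matching: the score of the perturbation kernel
    # N(x, sigma^2 I) at x_tilde is -(x_tilde - x) / sigma^2 = -noise / sigma.
    idx = torch.randint(0, len(sigmas), (x.size(0),))
    sigma = sigmas[idx].unsqueeze(1)
    noise = torch.randn_like(x)
    x_tilde = x + sigma * noise
    target = -noise / sigma
    pred = model(x_tilde, sigma)
    return (sigma ** 2 * (pred - target) ** 2).sum(dim=1).mean()   # lambda(sigma) = sigma^2

@torch.no_grad()
def annealed_langevin(model, n=500, eps=2e-3, steps=100):
    x = torch.rand(n, 2) * 2 - 1                               # start from a simple prior
    for sigma in sigmas:                                       # anneal from coarse to fine
        alpha = eps * (sigma / sigmas[-1]) ** 2                # step size per noise level
        for _ in range(steps):
            z = torch.randn_like(x)
            s = torch.full((n, 1), float(sigma))
            x = x + alpha / 2 * model(x, s) + torch.sqrt(alpha) * z
    return x

data = torch.randn(2048, 2) * 0.3 + torch.tensor([[1.5, 0.0]])     # toy 2D dataset
model = ScoreNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = dsm_loss(model, data)
    loss.backward()
    opt.step()
print(annealed_langevin(model).mean(0))                        # should drift toward roughly (1.5, 0)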
arXiv,
2019.
DOI: 10.48550/arXiv.1907.05600
Abstract:
We introduce a new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching. Because gradients can be ill-defined and hard to estimate when the data resides on low-dimensional manifolds, we perturb the data with different levels of Gaussian noise, and jointly estimate the corresponding scores, i.e., the vector fields of gradients of the perturbed data distribution for all noise levels. For sampling, we propose an annealed Langevin dynamics where we use gradients corresponding to gradually decreasing noise levels as the sampling process gets closer to the data manifold. Our framework allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons. Our models produce samples comparable to GANs on MNIST, CelebA and CIFAR-10 datasets, achieving a new state-of-the-art inception score of 8.87 on CIFAR-10. Additionally, we demonstrate that our models learn effective representations via image inpainting experiments.
33.
尹志
(2022-04-28 22:10):
#paper https://doi.org/10.48550/arXiv.1503.03585 Deep Unsupervised Learning using Nonequilibrium Thermodynamics, ICML 2015. A paper I have not yet fully digested, but a very interesting one. The diffusion model introduced here may not ring a bell for everyone, but OpenAI's recently celebrated DALL-E 2 probably does; this paper is where that line of work ultimately traces back to. Inspired by non-equilibrium thermodynamics, the authors design a generative model they call a diffusion model. In machine learning, estimating the distribution of a dataset is a real challenge, especially if the model must be both flexible and tractable. If we take the mapping from a latent variable z to an observation x as the task, the diffusion model's idea is to assume the whole mapping is a Markov chain (MC): starting from the original data, Gaussian noise is added step by step until some terminal form is reached; conversely, the denoising process can be viewed as the generative process. We train against this MC, and its reverse process then serves as a generative model producing samples from the learned distribution. Yes, rather like a VAE. Given that this family of generative models has, through continual improvement, reached the level of DALL-E 2, it is worth understanding the underlying mechanism in depth and asking whether it can do even better at data synthesis.
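A minimal sketch of the forward (noising) Markov chain under the usual Gaussian parameterization; the schedule and shapes here are illustrative, not the paper's exact setup.

import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                    # noise schedule beta_t
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)                 # alpha_bar_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t):
    # Closed form of the forward chain: q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I),
    # obtained by composing q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) x_{t-1}, beta_t I) step by step.
    noise = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    return a.sqrt() * x0 + (1 - a).sqrt() * noise, noise

x0 = torch.randn(4, 3, 32, 32)                           # a toy batch of "images"
t = torch.randint(0, T, (4,))
xt, eps = q_sample(x0, t)
print(xt.shape)                                          # training regresses eps (the noise) from (x_t, t)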
arXiv,
2015.
DOI: 10.48550/arXiv.1503.03585
Abstract:
A central problem in machine learning involves modeling complex data-sets using highly flexible families of probability distributions in which learning, sampling, inference, and evaluation are still analytically or computationally tractable. Here, we develop an approach that simultaneously achieves both flexibility and tractability. The essential idea, inspired by non-equilibrium statistical physics, is to systematically and slowly destroy structure in a data distribution through an iterative forward diffusion process. We then learn a reverse diffusion process that restores structure in data, yielding a highly flexible and tractable generative model of the data. This approach allows us to rapidly learn, sample from, and evaluate probabilities in deep generative models with thousands of layers or time steps, as well as to compute conditional and posterior probabilities under the learned model. We additionally release an open source reference implementation of the algorithm.
34.
尹志
(2022-03-25 14:10):
#paper doi:10.1109/CVPR.2015.7298682, 2015, FaceNet: A unified embedding for face recognition and clustering. A classic paper in face recognition, written by Google and published at CVPR 2015. It reached 99.63% on LFW and 95.12% on YouTube Faces DB, state of the art at the time. Although it is about faces, the idea carries over to many other settings, including all sorts of image recognition and natural language processing problems. The paper introduces an end-to-end training scheme that models the embedding space directly: each face image is mapped to a point in the embedding space, where faces of the same identity should lie close together and faces of different identities should lie far apart. Such an embedding acts as a feature extractor, providing efficient pre-computation for downstream face detection, recognition and clustering. The network architecture is straightforward, mainly the then-novel Inception network; the interesting part is the loss. The paper introduces the triplet loss, computed over anchor-positive and anchor-negative pairs, where the anchor is an image of some identity, the positive is another image of the same identity, and the negative is an image of a different identity. The idea is simple: train so that anchor-positive distances are small and anchor-negative distances are large. Mathematically, the loss is the anchor-positive distance minus the anchor-negative distance plus a margin alpha, clamped at zero; the margin can be read as a constraint that keeps faces of the same identity tightly clustered in the embedding while enforcing a gap to other identities. In practice the choice of triplets also matters a lot; see the paper if interested. The paper is old and so is the architecture, but its simple idea and strong results have inspired a great deal of later recognition work, both in research and in industry; anyone who has worked with word2vec will feel a pang of recognition.
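A minimal PyTorch sketch of the triplet loss on L2-normalized embeddings; the 0.2 margin and 128-dimensional embeddings follow my reading of the paper, and the random tensors are obviously just a placeholder for a real embedding network.

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, alpha=0.2):
    # FaceNet-style triplet loss: max(0, ||a - p||^2 - ||a - n||^2 + alpha)
    # computed on L2-normalized embeddings.
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    n = F.normalize(negative, dim=1)
    d_ap = (a - p).pow(2).sum(dim=1)
    d_an = (a - n).pow(2).sum(dim=1)
    return F.relu(d_ap - d_an + alpha).mean()

emb = lambda: torch.randn(32, 128)                       # stand-in for an embedding network's output
print(triplet_loss(emb(), emb(), emb()))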
Abstract:
No abstract available.
35.
尹志
(2022-02-08 23:23):
#paper doi: 10.7554/eLife.58906 Anna A Ivanova, et al. Comprehension of computer code relies primarily on domain-general executive brain regions. eLife 2020;9:e58906 (2020). A neuroscience study cited by the author of a small programming book I was reading. The paper asks: programming is a cognitive activity, so which cognitive and neural mechanisms support it? The researchers used fMRI to examine two brain systems: (1) the multiple demand (MD) system, typically recruited for math, logic and problem solving; and (2) the language system, typically recruited for language processing. Using two kinds of programming, text-based Python and the graphical ScratchJr, they contrasted code comprehension with content-matched sentence problems. They found that the MD system responded strongly to code in both programming styles, whereas the language system responded strongly only to the sentence problems and weakly to code. To some extent this suggests that programming is a cognitive activity akin to problem solving or doing math: although code often takes a textual form and we habitually speak of programming "languages", the brain mechanisms that process it do not, experimentally, appear to map onto ordinary language processing.
Abstract:
Computer programming is a novel cognitive tool that has transformed modern society. What cognitive and neural mechanisms support this skill? Here, we used functional magnetic resonance imaging to investigate two candidate brain systems: the multiple demand (MD) system, typically recruited during math, logic, problem solving, and executive tasks, and the language system, typically recruited during linguistic processing. We examined MD and language system responses to code written in Python, a text-based programming language (Experiment 1) and in ScratchJr, a graphical programming language (Experiment 2); for both, we contrasted responses to code problems with responses to content-matched sentence problems. We found that the MD system exhibited strong bilateral responses to code in both experiments, whereas the language system responded strongly to sentence problems, but weakly or not at all to code problems. Thus, the MD system supports the use of novel cognitive tools even when the input is structurally similar to natural language.
36.
尹志
(2022-01-31 12:53):
#paper doi:10.1038/nature14539 LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436-444 (2015). A Nature review written in 2015 by the three giants of deep learning, one of a series of review papers Nature ran to mark 60 years of AI. It surveys the then-hot topic of deep learning and, coming from several of its founders, explains the concepts, principles and applications with real clarity. The article runs from supervised learning through backpropagation and mainly reviews the principles and applications of CNNs and RNNs, making it well suited for beginners wanting an overview of deep learning as it stood at the time. In the closing section on the future of deep learning, the authors place high hopes on unsupervised learning; looking at the past few years, and in particular at how self-supervised learning, championed by Yann LeCun, has become mainstream, those hopes have certainly been answered.
Abstract:
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
37.
尹志
(2022-01-18 23:37):
#paper doi:10.1038/s41416-020-01122-x Deep learning in cancer pathology: a new generation of clinical biomarkers. British Journal of Cancer, 2020 Nov 18. A review of the concepts around using deep learning to extract biomarkers directly from pathology images, and of the basic and advanced image-analysis tasks deep learning performs on such images.
As we know, clinical oncology workflows rely on various molecular biomarkers, but these are time-consuming and labor-intensive, and generally require tumour tissue. Yet routinely available tumour tissue contains a lot of information that we currently fail to exploit. With deep learning, more information can be extracted directly from routine pathology images, surfacing potentially clinically useful signals.
The basic tasks covered include detection, grading and tumour-tissue subtyping. These aim to automate the pathology workflow, and their outputs do not translate directly into clinical decisions (so, assisted diagnosis).
The advanced tasks can directly affect clinical decisions: inference of molecular features, survival prediction and end-to-end prediction of therapy response. All of these could directly influence clinical decision-making, but they still need better clinical validation, for example through more prospective trials (in other words, not ready for clinical use yet).
Abstract:
Clinical workflows in oncology rely on predictive and prognostic molecular biomarkers. However, the growing number of these complex biomarkers tends to increase the cost and time for decision-making in routine daily oncology practice; furthermore, biomarkers often require tumour tissue on top of routine diagnostic material. Nevertheless, routinely available tumour tissue contains an abundance of clinically relevant information that is currently not fully exploited. Advances in deep learning (DL), an artificial intelligence (AI) technology, have enabled the extraction of previously hidden information directly from routine histology images of cancer, providing potentially clinically useful information. Here, we outline emerging concepts of how DL can extract biomarkers directly from histology images and summarise studies of basic and advanced image analysis for cancer histology. Basic image analysis tasks include detection, grading and subtyping of tumour tissue in histology images; they are aimed at automating pathology workflows and consequently do not immediately translate into clinical decisions. Exceeding such basic approaches, DL has also been used for advanced image analysis tasks, which have the potential of directly affecting clinical decision-making processes. These advanced approaches include inference of molecular features, prediction of survival and end-to-end prediction of therapy response. Predictions made by such DL systems could simplify and enrich clinical decision-making, but require rigorous external validation in clinical settings.