JourneyToCoding

Code for Fun

The graph Fourier transform is the Fourier transform applied to discrete graph signals. It provides the theoretical foundation for graph neural networks, and for graph convolutional networks in particular.

Read more »
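As a taste of the idea: the graph Fourier transform of a signal is its projection onto the eigenbasis of the graph Laplacian. The minimal sketch below uses a hypothetical 4-node path graph chosen purely for illustration.

```python
import numpy as np

# Hypothetical 4-node path graph: adjacency A and graph Laplacian L = D - A
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Eigenvectors of L play the role of Fourier modes on the graph
eigvals, U = np.linalg.eigh(L)

# A signal on the graph: one value per node
x = np.array([1.0, 2.0, 3.0, 4.0])

x_hat = U.T @ x   # forward transform: project onto the Laplacian eigenbasis
x_rec = U @ x_hat # inverse transform: exact, since U is orthonormal

print(np.allclose(x, x_rec))  # True
```

Because `L` is symmetric, `np.linalg.eigh` returns an orthonormal eigenbasis, so the inverse transform reconstructs the signal exactly.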

The Laplacian operator is a concept commonly used in image edge detection and in graph neural networks. It reveals the convexity of a multivariate function and how the function value at a point relates to the values at surrounding points.

Read more »
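For edge detection, the Laplacian is usually approximated by a small convolution kernel that compares each pixel with its four neighbours. A minimal sketch (toy image invented for illustration):

```python
import numpy as np

# Discrete Laplacian kernel: second-derivative approximation comparing a
# pixel with its four neighbours
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=float)

def laplacian_filter(img):
    """Apply the Laplacian kernel to a 2-D grayscale image (valid region only)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * kernel)
    return out

# Toy image: dark region on the left, bright block on the right
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = laplacian_filter(img)
# Responses are non-zero only along the vertical boundary between the regions
```

In flat regions the kernel sums to zero, so only the columns straddling the intensity jump produce a response.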

Graph neural networks (GNNs) are neural networks that operate on graph-structured data. They have advanced rapidly over the past decade; in particular, the emergence of graph convolutional networks (GCNs) and their variants has brought solid progress in practical applications such as traffic-flow prediction and recommender systems.

Read more »
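As a taste of what a GCN computes, here is a minimal sketch of the standard GCN propagation rule, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W), with a toy 3-node graph and random weights invented for illustration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalisation
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy 3-node path graph with 2-dim node features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 2))  # randomly initialised layer weights

H_next = gcn_layer(A, H, W)      # shape (3, 2), non-negative after ReLU
```

Each node's new features are a normalised average over its neighbourhood (including itself), projected through the learned weights.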

This paper appeared after the previous two, but rather than following Distribution Matching in trying to fuse the coreset and distillation approaches, it returns to the original parameter-matching method. It improves on that method, achieving better performance along with substantial gains in memory consumption and speed.

Read more »

The method proposed in this paper improves on the authors' earlier gradient-matching-based dataset condensation. That approach requires gradient descent in two directions as well as second-order derivatives, which is computationally expensive and limits its use on large datasets. The distribution-matching-based dataset condensation proposed here addresses this problem effectively.

Read more »
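A minimal sketch of the distribution-matching idea (not the paper's full method): instead of matching gradients through bi-level optimisation, match the feature distributions of real and synthetic data under a randomly initialised encoder, here via mean embeddings. All names and sizes below are illustrative assumptions.

```python
import numpy as np

def dm_loss(real, syn, W):
    """Distribution-matching loss sketch: squared distance between the mean
    embeddings of real and synthetic data under a (random) encoder W."""
    emb_real = np.maximum(0, real @ W).mean(axis=0)  # mean real embedding
    emb_syn = np.maximum(0, syn @ W).mean(axis=0)    # mean synthetic embedding
    return np.sum((emb_real - emb_syn) ** 2)

rng = np.random.default_rng(0)
real = rng.standard_normal((100, 8))  # "real" dataset (toy)
syn = rng.standard_normal((10, 8))    # small synthetic set to be optimised
W = rng.standard_normal((8, 16))      # randomly initialised encoder

loss = dm_loss(real, syn, W)  # one cheap forward pass, no bi-level optimisation
```

Note there is no second-order term anywhere: evaluating the loss is a single forward pass, which is what makes the approach scale to large datasets.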

This paper improves on traditional dataset distillation, proposing what it calls "dataset condensation". It is one of the most groundbreaking studies in the dataset distillation field: it is the first to propose the gradient-matching strategy, which greatly improves both the test accuracy and the generalization of the distilled dataset.

Read more »
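The gradient-matching strategy can be sketched in a few lines: make the gradient produced by a small synthetic set resemble the gradient produced by the real data under the same model weights. The linear model, MSE loss, and cosine-distance criterion below are simplifying assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def grads(X, Y, W):
    """Gradient of the MSE loss ||XW - Y||^2 / n for a linear model."""
    n = X.shape[0]
    return 2.0 * X.T @ (X @ W - Y) / n

def matching_loss(g_real, g_syn):
    """Cosine distance between the flattened real and synthetic gradients."""
    a, b = g_real.ravel(), g_syn.ravel()
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
X_real, Y_real = rng.standard_normal((100, 5)), rng.standard_normal((100, 2))
X_syn, Y_syn = rng.standard_normal((4, 5)), rng.standard_normal((4, 2))
W = rng.standard_normal((5, 2))  # shared model weights

# The synthetic set would be optimised to drive this loss toward zero,
# so training on it pushes the model in the same direction as the real data
loss = matching_loss(grads(X_real, Y_real, W), grads(X_syn, Y_syn, W))
```

The loss lies in [0, 2] and reaches 0 when the synthetic gradients point in exactly the same direction as the real ones.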

The transformer, distinguished by its use of self-attention and multi-head attention, is a deep learning model built on the encoder-decoder architecture. It is applied in both CV and NLP: the transformer's encoder gave rise to BERT, and its decoder gave rise to GPT.

Read more »

An attention mechanism is a neural network layer added to deep learning models to focus on specific parts of the data, based on different weights assigned to different parts. Just as the neural network is an attempt to mimic the human brain in a simplified manner, the attention mechanism is an attempt to implement the same act of selectively concentrating on a few relevant things while ignoring others.

Read more »
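The weighting described above can be sketched as scaled dot-product attention, the most common form: each query is scored against every key, the scores are turned into weights with a softmax, and the values are averaged under those weights. The toy shapes below are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: weight the values V by how well each
    query in Q matches each key in K."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # one weight per (query, key)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))  # 3 queries
K = rng.standard_normal((5, 4))  # 5 keys
V = rng.standard_normal((5, 4))  # 5 values

out, weights = attention(Q, K, V)
# Each row of `weights` sums to 1: attention distributes focus over the values
```

Multi-head attention simply runs several such layers in parallel on learned projections of Q, K, and V, then concatenates the results.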

The encoder-decoder architecture offers a new perspective on neural networks: it treats the network as a kind of signal processor that encodes the input and decodes it to generate the output.

Read more »