Weight matrix of a neural network that fails to grow connections as two parallel layers are merged.

  • Joseph Van Name
  • 2024-01-11
  • 2109

Tags: machine learning, artificial intelligence, AI


Description of the video: Weight matrix of a neural network that fails to grow connections as two parallel layers are merged.

The video is an animation of the weight matrix of a neural network (absolute values of the weights are shown). The network was trained to memorize random data. In the first part of training, connections in a block of this weight matrix are (mostly) ablated, that is, removed, at random after each round, so we effectively have two parallel layers with no connections between them. In the second part of training, these connections are no longer ablated and the two parallel layers are merged, but the network only partially regrows the connections. It would be convenient if we could easily merge two neural networks trained to perform the same task, but the merged network has trouble growing connections between the two layers, so even after further training the two parallel layers do not interact very much.
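The two-phase ablation schedule described above can be sketched as follows. This is a hypothetical toy, not the network from the video (whose architecture and training details are not given here): it uses a single linear map whose weight matrix is split into two within-block "parallel layers" plus cross-block connections that are randomly zeroed after each round of phase one and left free in phase two. Because this toy problem is convex, its cross-connections typically do regrow, unlike the deep networks the video discusses; the sketch illustrates the protocol, not the regrowth failure.

```python
import numpy as np

rng = np.random.default_rng(0)

d, h = 8, 4            # d x d weight matrix; each "parallel layer" is an h x h diagonal block
n = 32                 # number of random (x, y) pairs the model is trained to memorize
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, d))

W = 0.1 * rng.standard_normal((d, d))

# Boolean mask selecting the cross-block entries (connections between the two layers).
cross = np.zeros((d, d), dtype=bool)
cross[:h, h:] = True
cross[h:, :h] = True

def step(W, lr=0.01):
    """One gradient-descent step on the squared error of the linear map x -> W x."""
    G = (X @ W.T - Y).T @ X / n
    return W - lr * G

# Phase 1: after each round, ablate most cross-block connections at random.
for _ in range(200):
    W = step(W)
    ablate = cross & (rng.random((d, d)) < 0.9)
    W[ablate] = 0.0

# Phase 2: the layers are merged; cross-connections are free to regrow.
for _ in range(200):
    W = step(W)

cross_mean = np.abs(W[cross]).mean()
block_mean = np.abs(W[~cross]).mean()
print(f"mean |weight|, cross-block:  {cross_mean:.4f}")
print(f"mean |weight|, within-block: {block_mean:.4f}")
```

Comparing the mean absolute cross-block weight against the within-block weights after phase two gives a crude measure of how much the two layers interact after merging.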

The non-interaction of the layers after merging suggests that neural networks lack interpretability: not even parallel layers within the same network are able to interpret each other and work with each other.

This animation demonstrates that neural networks retain random information even after being retrained not to retain it (the animation shows that networks retain the same structure under retraining; it does not let us make any claims about retained functionality). This is undesirable for several reasons. If neural networks retain structure after a change in how they are trained, then they will also retain random information from initialization (I have performed computer experiments showing that this is the case), and such random information hampers attempts to interpret the networks. Furthermore, the retention of random information means that after retraining, networks may retain unwanted abilities (catastrophic forgetting limits their ability to retain such abilities, but the catastrophic forgetting of neural networks is far from complete).

One possible benefit of the retention of information after retraining is that it makes it easier to ablate (remove) unwanted portions of these networks. In practice, however, ablation does not work very well; at the very least it requires the networks to be interpretable, and the retention of unwanted information is itself a sign of non-randomness in these networks.

The notion of a neural network is not my own.

I am making neural network animations now in order to contrast the behavior of neural networks with that of the other machine learning algorithms I am actually working on, such as LSRDRs. If we were to make a similar animation with an LSRDR, it would regrow all the ablated connections, since LSRDRs tend to have very few local maxima. At the least, after retraining, an LSRDR will not leave behind scars from ablation once it is permitted to regrow. LSRDRs are capable of regeneration.

Unless otherwise stated, all algorithms featured on this channel are my own. You can go to https://github.com/sponsors/jvanname to support my research on machine learning algorithms. I am designing machine learning algorithms for AI safety such as LSRDRs. In particular, my algorithms are designed to be more predictable and understandable to humans than other machine learning algorithms, and my algorithms can be used to interpret more complex AI systems such as neural networks. With more understandable AI, we can ensure that AI systems will be used responsibly and that we will avoid catastrophic AI scenarios. There is currently nobody else who is working on LSRDRs, so your support will ensure a unique approach to AI safety.
