Neural Network Week 1 & Week 2
Published: 2019-06-23


1. Different types of neurons

  • Linear neurons
  • Binary threshold neurons
  • Rectified linear neurons
  • Sigmoid neurons
  • Stochastic binary neurons
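
A minimal NumPy sketch of these five activation rules, assuming `z` is an array of total weighted inputs to the neurons (the function names here are mine, not from the course):

```python
import numpy as np

def linear(z):
    # Linear neuron: the output is simply the total weighted input
    return z

def binary_threshold(z):
    # Binary threshold neuron: output 1 if the input exceeds the
    # threshold (folded into z via a bias weight), otherwise 0
    return (z > 0).astype(float)

def rectified_linear(z):
    # Rectified linear neuron: linear above zero, silent below
    return np.maximum(0.0, z)

def sigmoid(z):
    # Sigmoid neuron: smooth, bounded output in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def stochastic_binary(z, rng=None):
    # Stochastic binary neuron: treat the sigmoid output as the
    # probability of emitting a 1
    rng = rng or np.random.default_rng()
    return (rng.random(np.shape(z)) < sigmoid(z)).astype(float)
```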

 

2. Reinforcement learning

Learn to select an action to maximize payoff.

– The goal in selecting each action is to maximize the expected sum of the future rewards.
– We usually use a discount factor for delayed rewards so that we don't have to look too far into the future.

Reinforcement learning is difficult:

– The rewards are typically delayed, so it's hard to know where we went wrong (or right).
– A scalar reward does not supply much information.
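
For instance, the discounted sum of future rewards can be computed as below (a minimal sketch; the discount factor `gamma = 0.9` is an arbitrary illustration, not a value from the notes):

```python
def discounted_return(rewards, gamma=0.9):
    # Work backwards so each reward is discounted by gamma once per
    # time step separating it from the present.
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# A delayed payoff is worth less than an immediate one:
print(discounted_return([0, 0, 1]))  # 0.81
print(discounted_return([1, 0, 0]))  # 1.0
```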

 

3. Main types of neural network architecture

  • Feed-forward (sketched in code after this list, together with a recurrent net)
    • The first layer is the input and the last layer is the output
    • They compute a series of transformations that change the similarities between cases
    • The activities of the neurons in each layer are a non-linear function of the activities in the layer below

  • Recurrent
    • They have directed cycles in their connection graph
    • They have complicated dynamics
    • It is a very natural way to model sequential data
      • They are equivalent to very deep nets with one hidden layer per time slice
      • They use the same weights at every time slice and they get input at every time slice.
    • They have the ability to remember information in their hidden state

  • Symmetrically connected networks
    • They are like recurrent networks but the connections between units are symmetrical (same weights in both directions)
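
Below is a minimal NumPy sketch contrasting the first two architectures: a two-layer feed-forward pass, and a recurrent net that reuses the same weights at every time slice. All layer sizes and weight values here are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Feed-forward: the activities of each layer are a non-linear
# function of the activities in the layer below.
W1 = rng.normal(size=(4, 3))   # input (3 units) -> hidden (4 units)
W2 = rng.normal(size=(2, 4))   # hidden (4 units) -> output (2 units)

def feed_forward(x):
    h = np.tanh(W1 @ x)        # hidden-layer activities
    return np.tanh(W2 @ h)     # output-layer activities

# Recurrent: one hidden state per time slice, with the *same*
# weights reused at every slice; the hidden state carries
# information forward in time.
W_in  = rng.normal(size=(4, 3))
W_rec = rng.normal(size=(4, 4))

def run_rnn(inputs):
    h = np.zeros(4)
    for x in inputs:                       # input at every time slice
        h = np.tanh(W_in @ x + W_rec @ h)  # same W_in, W_rec each step
    return h

print(feed_forward(np.ones(3)))
print(run_rnn([np.ones(3)] * 5))
```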

 

4. Perceptrons

  • Add an extra component with value 1 to each input vector. The “bias” weight on this component is minus the threshold. Now we can forget the threshold.
  • Pick training cases using any policy that ensures that every training case will keep getting picked.
    • If the output unit is correct, leave its weights alone
    • If the output unit incorrectly outputs a 1, subtract the input vector from the weight vector
    • If the output unit incorrectly outputs a zero, add the input vector to the weight vector.
  • This is guaranteed to find a set of weights that gets the right answer for all the training cases if any such set exists (see the sketch below).
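
A minimal sketch of this learning procedure, assuming cyclic sweeps through the training cases (the helper name and the AND example are mine):

```python
import numpy as np

def train_perceptron(X, y, epochs=10):
    # Append a constant-1 component so the bias weight absorbs
    # the threshold, as described above.
    X = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):            # every case keeps getting picked
        for x, target in zip(X, y):
            out = 1.0 if w @ x > 0 else 0.0
            if out == target:
                pass                   # correct: leave the weights alone
            elif out == 1.0:
                w -= x                 # wrongly output 1: subtract the input
            else:
                w += x                 # wrongly output 0: add the input
    return w

# AND is linearly separable, so a solution exists and is found:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
w = train_perceptron(X, np.array([0.0, 0.0, 0.0, 1.0]))
print(w)
```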

 

5. The limitations of Perceptrons

  • Once the hand-coded features have been determined, there are very strong limitations on what a perceptron can learn (see the check after this list)
  • The part of a perceptron that learns cannot learn to discriminate patterns under transformation if the transformations form a group (e.g., translation with wrap-around)
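
As a concrete illustration of the first point, a brute-force search over a grid of candidate weight vectors finds none that computes XOR; indeed none exists, since XOR is not linearly separable:

```python
import numpy as np
from itertools import product

# Inputs with a constant bias component appended, and XOR targets
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([0, 1, 1, 0])

# Try every weight vector on a coarse grid
found = any(
    np.array_equal((X @ np.array(w) > 0).astype(int), y)
    for w in product(np.arange(-2, 2.25, 0.25), repeat=3)
)
print(found)  # False: no threshold unit in this grid computes XOR
```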

 

Reposted from: https://www.cnblogs.com/climberclimb/p/7096778.html
