bluesea, you actually dare to discuss AI and Monte Carlo with me? Do you even know your own limits?
新语丝读书论坛
Posted by: 短江学者 on 2016-03-15, 21:53:10:
All replies:
So a 短江学者 ("Short-River Scholar") is already this full of himself; if a 长江 (Yangtze) one showed up, that would really be something, haha ^_^ (no content)
-
catfish
(0 bytes)
2016-03-16, 14:53:01
(804050)
Heaven is high and earth is deep; the river is short but the flow runs long. (no content)
-
短江学者
(0 bytes)
2016-03-16, 15:17:45
(804052)
What kind of river is short yet flows long? (no content)
-
pgss
(0 bytes)
2016-03-16, 19:24:52
(804056)
The Zhijiang is short and the Yellow River is long; Chinese has no fixed rules. For instance, "artificial intelligence" should really be translated as "fake intelligence". (no content)
-
短江学者
(0 bytes)
2016-03-16, 19:39:29
(804057)
They aren't even the same river; short J and long B, what a pity. (no content)
-
pgss
(0 bytes)
2016-03-16, 20:02:29
(804059)
I think Silicon Intelligence is a good name (no content)
-
conner
(0 bytes)
2016-03-16, 19:52:36
(804058)
MCTS is not AI itself; it is merely one tool within AI
-
1FD7
(88 bytes)
2016-03-16, 10:43:04
(804040)
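The point that MCTS is just one generic tool can be made concrete with a minimal Monte Carlo tree search skeleton on a toy game, with nothing Go-specific in it. This is a hypothetical illustrative sketch (the game, class, and constants are invented here), not AlphaGo's implementation:

```python
import math
import random

# Toy game: a pile of stones; players alternate removing 1 or 2 stones,
# and whoever takes the last stone wins.

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones left after `move` was played
        self.parent = parent
        self.move = move          # move that led into this node
        self.children = []
        self.untried = legal_moves(stones)
        self.visits = 0
        self.wins = 0.0           # wins for the player who made `move`

    def uct_child(self, c=1.4):
        # Classic UCT: win rate plus an exploration bonus.
        return max(self.children,
                   key=lambda n: n.wins / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def mcts_best_move(stones, iters=3000, seed=0):
    rng = random.Random(seed)
    root = Node(stones)
    for _ in range(iters):
        node = root
        # 1. Selection: descend through fully expanded nodes.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one untried child.
        if node.untried:
            m = node.untried.pop(rng.randrange(len(node.untried)))
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new node.
        s = node.stones
        opponent_to_move = True   # the side that did NOT just move acts next
        mover_won = (s == 0)      # no stones left: the mover just won
        while s > 0:
            s -= rng.choice(legal_moves(s))
            if s == 0:
                mover_won = not opponent_to_move
            opponent_to_move = not opponent_to_move
        # 4. Backpropagation: flip the winner's perspective at each level.
        won, n = mover_won, node
        while n is not None:
            n.visits += 1
            n.wins += 1.0 if won else 0.0
            won = not won
            n = n.parent
    # Return the most-visited move, the usual robust choice.
    return max(root.children, key=lambda n: n.visits).move
```

The four phases here (selection, expansion, simulation, backpropagation) are the whole of generic MCTS; everything AlphaGo adds on top (policy and value networks) is a separate component plugged into this loop.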
Haha, I think AI is a load of crap; that was calling his bluff, and AI can't do that yet :) (no content)
-
短江学者
(0 bytes)
2016-03-16, 11:00:52
(804041)
Yoshua Bengio on AI (or AS: artificial stupidity, but not totally BS)
-
短江学者
(1123 bytes)
2016-03-16, 11:20:39
(804042)
In your own words? This quote only covers some of the issues (no content)
-
1FD7
(0 bytes)
2016-03-16, 14:26:19
(804049)
That's not how the human mind works. There is no clear path to analogical thinking. (no content)
-
短江学者
(0 bytes)
2016-03-16, 14:59:04
(804051)
AI can be done by any tech-savvy person. (no content)
-
cornbug
(0 bytes)
2016-03-16, 01:31:26
(804017)
Better to keep a low profile
-
html
(189 bytes)
2016-03-16, 00:47:58
(804014)
What bluesea said doesn't contradict this, does it? (no content)
-
pgss
(0 bytes)
2016-03-16, 01:10:48
(804015)
If I'm wrong or don't understand something, I'll say so; please don't hesitate to enlighten me. How is that not knowing my limits?
-
bluesea
(66 bytes)
2016-03-15, 22:56:46
(804006)
Just kidding. Don't you often talk about how you used to play video games? AI theory is to me what gaming is to you. (no content)
-
短江学者
(0 bytes)
2016-03-15, 23:01:50
(804008)
You're just goading me onto your pirate ship, aren't you? (no content)
-
bluesea
(0 bytes)
2016-03-15, 23:10:09
(804010)
No, that was calling your bluff. But take a look at equation (1) in the Nature paper and see what you think. (no content)
-
短江学者
(0 bytes)
2016-03-15, 22:10:41
(804001)
Also, here is my earlier post; its last sentence mentioned MC
-
短江学者
(542 bytes)
2016-03-15, 22:13:28
(804002)
This is still consistent with my understanding. But I have some questions; could the experts here answer and discuss them?
-
pgss
(731 bytes)
2016-03-16, 00:41:30
(804013)
If you understand the memorization I mentioned, then you know the answers
-
conner
(602 bytes)
2016-03-16, 08:47:45
(804020)
This is exactly where it gets strange
-
008
(252 bytes)
2016-03-16, 09:06:51
(804024)
It really could be that the "dog" (AlphaGo) was cheating by going easy. Then I'd have lost unfairly. (no content)
-
pgss
(0 bytes)
2016-03-16, 10:24:16
(804033)
No chance (no content)
-
conner
(0 bytes)
2016-03-16, 10:27:05
(804036)
The only explanation is that it judged this a losing move, with no need to build out that branch of the tree. (no content)
-
pgss
(0 bytes)
2016-03-16, 09:38:36
(804026)
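The "judged a losing move, so no need to grow the tree there" reading matches how prior-guided selection behaves: under a PUCT-style rule (the family of selection formulas described in the AlphaGo paper), a move with a very low value estimate and prior receives almost no visits, so its subtree is barely built. A toy comparison, with all numbers invented for illustration:

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c=1.0):
    """PUCT-style score: value estimate plus a prior-scaled exploration
    bonus that decays as the child accumulates visits."""
    return q + c * prior * math.sqrt(parent_visits) / (1 + child_visits)

# A strong, well-explored move vs. a move the network rates near-hopeless
# (hypothetical numbers, not AlphaGo's actual statistics):
strong = puct_score(q=0.55, prior=0.30, parent_visits=10_000, child_visits=4_000)
hopeless = puct_score(q=0.10, prior=0.0001, parent_visits=10_000, child_visits=3)
# `strong` dominates, so selection keeps returning to it and the
# hopeless move's subtree stays tiny.
```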
No, only once the tree grows decently deeper from there
-
conner
(161 bytes)
2016-03-16, 09:46:17
(804027)
But it wasn't in the tree, Hassabis said. (no content)
-
pgss
(0 bytes)
2016-03-16, 09:59:40
(804030)
What is "it"? Move 78? (no content)
-
conner
(0 bytes)
2016-03-16, 10:07:05
(804031)
Yes. He said move 78 had less than a one-in-10,000 chance in its estimate (no content)
-
pgss
(0 bytes)
2016-03-16, 10:15:49
(804032)
Then what is the criterion for building a new scenario? (no content)
-
pgss
(0 bytes)
2016-03-16, 20:44:19
(804062)
That means it was in the tree with a lower Bayesian value
-
conner
(89 bytes)
2016-03-16, 10:26:11
(804034)
The program has a time limit (run limit) for each move
-
conner
(366 bytes)
2016-03-16, 09:14:40
(804025)
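A per-move time limit is usually enforced as a wall-clock budget on the simulation loop: MCTS is an anytime algorithm, so the program simply runs playouts until the deadline and then answers with the best move found so far. A minimal sketch with a hypothetical function name, not DeepMind's actual scheduler:

```python
import time

def search_until_deadline(run_one_simulation, budget_seconds):
    """Run MCTS simulations until the per-move wall-clock budget is
    spent; return how many playouts fit within the budget."""
    deadline = time.monotonic() + budget_seconds
    playouts = 0
    while time.monotonic() < deadline:
        run_one_simulation()   # one select/expand/simulate/backprop pass
        playouts += 1
    return playouts
```

Because each playout only refines statistics already stored in the tree, stopping at the deadline still yields a reasonable move; more time simply means more refinement.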
The time limit was 5 seconds per move when playing against Fan Hui (no content)
-
conner
(0 bytes)
2016-03-16, 12:47:05
(804048)
When the opponent plays an unexpected move
-
008
(442 bytes)
2016-03-16, 11:49:15
(804043)
Time/resource allocation during a game is a very deep problem
-
conner
(377 bytes)
2016-03-16, 12:05:27
(804046)
It is like predicting how long "Lee" and itself will take for the next N moves (no content)
-
conner
(0 bytes)
2016-03-16, 12:10:12
(804047)
Makes no sense again (no content)
-
008
(0 bytes)
2016-03-16, 10:33:37
(804037)
Is the time limit per move, or per tree operation? (no content)
-
conner
(0 bytes)
2016-03-16, 11:50:12
(804044)
Time limit (no content)
-
008
(0 bytes)
2016-03-16, 12:00:06
(804045)
Thanks. But after a move, isn't it already a new scenario? (no content)
-
pgss
(0 bytes)
2016-03-16, 08:50:18
(804022)
No, it is very likely still in the tree. The tree is memorized and updated. (no content)
-
conner
(0 bytes)
2016-03-16, 08:52:05
(804023)
Sorry, the equations in the Nature paper aren't numbered; just make do with the abstract.
-
短江学者
(1508 bytes)
2016-03-15, 22:24:05
(804003)
If reinforcement learning is an inherent physio-neurological mechanism
-
cornbug
(45 bytes)
2016-03-16, 01:28:11
(804016)
It is because their craziness has not been properly evaluated (no content)
-
conner
(0 bytes)
2016-03-16, 08:49:19
(804021)
Title and authors of the Nature paper
-
短江学者
(485 bytes)
2016-03-15, 22:55:43
(804005)
How to use policies and how to learn policies are two different things. (no content)
-
bluesea
(0 bytes)
2016-03-15, 22:55:16
(804004)
Whether or not you understand the Nature paper is also a different matter. The core idea of reinforcement learning is learning while using. (no content)
-
短江学者
(0 bytes)
2016-03-15, 22:59:16
(804007)
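"Learning while using" can be shown in the smallest RL setting, a two-armed bandit: the agent chooses arms with its current value estimates while simultaneously updating those estimates from observed rewards. A toy sketch with made-up reward probabilities, not the AlphaGo training pipeline:

```python
import random

def run_bandit(arm_probs, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: value estimates improve while the agent
    is already using them to choose actions."""
    rng = random.Random(seed)
    q = [0.0] * len(arm_probs)   # running value estimate per arm
    n = [0] * len(arm_probs)     # pull count per arm
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(arm_probs))                    # explore
        else:
            a = max(range(len(arm_probs)), key=lambda i: q[i])   # exploit
        r = 1.0 if rng.random() < arm_probs[a] else 0.0          # observe
        n[a] += 1
        q[a] += (r - q[a]) / n[a]   # incremental mean update: learn in use
    return q
```

Every step both uses the estimates (to pick an arm) and updates them (from the reward), which is the "learning while using" loop in miniature.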
So you think reinforcement learning is nearly the sole, or at least the most fundamental and primary, algorithmic framework of AlphaGo.
-
bluesea
(20 bytes)
2016-03-15, 23:08:41
(804009)
Probably so. The game records and the initial policy network just narrow the scope and reduce the options (no content)
-
pgss
(0 bytes)
2016-03-15, 23:21:27
(804012)
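The point that game records and the initial policy network mainly "narrow the scope" can be sketched as move pruning: keep only the few moves the policy assigns meaningful prior probability, and let the search explore that reduced set. The move names and probabilities below are invented for illustration, not actual network outputs:

```python
def narrow_candidates(policy, top_k=3, min_prob=0.01):
    """Keep only the top_k moves with prior probability >= min_prob;
    the tree search then explores this reduced set, not all moves."""
    ranked = sorted(policy.items(), key=lambda kv: kv[1], reverse=True)
    return [move for move, p in ranked[:top_k] if p >= min_prob]

# Toy prior over five candidate moves (made-up numbers):
priors = {"D4": 0.42, "Q16": 0.30, "C3": 0.15, "K10": 0.08, "A1": 0.005}
```

On a 19x19 board this kind of narrowing is what makes the search tractable at all: instead of branching over ~361 legal moves, the tree branches over a handful of plausible ones.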
That's what their paper's title and abstract say. I can only go by what the paper says. (no content)
-
短江学者
(0 bytes)
2016-03-15, 23:15:10
(804011)