Donny

Latest

Recently I watched Bakemonogatari. It is quite a distinctive anime, I think. Come to think of it, it is already a work from twelve years ago. As its name suggests, Bakemonogatari is a story of apparitions (kaii). It depicts, one by one, the people struck by the apparitions that Araragi Koyomi helped resolve: Senjougahara Hitagi, Hachikuji Mayoi, Kanbaru Suruga, Sengoku Nadeko, and Hanekawa Tsubasa, five characters in all. Despite the title "Bakemonogatari" ("ghost story"), looking back now, it was a story about people. A story of people's misfortunes. An apparition does not exist on its own; it preys only on a specific person, and every apparition is nothing more than the consequence of something that person brought about. Apparitions arise from human misfortune and point back to it. Resolving an apparition, then, also means gently soothing a human heart. Senjougahara is
Transformer: Attention is all you need - Arxiv. Feed Forward: two fully connected (Linear) layers with a ReLU activation in between. Multi-head Attention / Self-Attention: Attention - Qiita. GPT (Generative Pre-Training): GPT - paper. BERT (Bidirectional Encoder Representations from Transformers): BERT - Arxiv, BERT Explained. Unlike GPT, which is trained using only the left context, BERT uses a bidirectional encoder that makes use of both the left and the right context. [MASK] is used to mask some of the words so that the model does not see the word itself indirectly. Pre-training of BERT uses two strategies, MLM (Masked Language Model) and NSP (Next Sentence Prediction), and the model is trained with both strategies together. The input embeddings of BERT consist of the token embeddings, the segment embeddings, and the position embeddings. Note that a segment may consist of multiple sentences. In the MLM task,
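The attention and feed-forward blocks mentioned in these notes can be sketched in a few lines of NumPy. This is a minimal single-head illustration (the function names and shapes are mine, not from the papers; a real Transformer adds projections, multiple heads, residuals, and layer norm):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

def feed_forward(x, W1, b1, W2, b2):
    # two fully connected layers with a ReLU activation in between
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2
```

Each row of the softmaxed score matrix sums to 1, so every output position is a convex combination of the value vectors.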
This game has been really popular lately, so I looked it up, got drawn in by the beautiful visuals, and ended up buying it (・ω<) tehepero. After playing for a bit, there are some genuinely interesting pieces of game design here, so I want to analyze them a little. Animation: first of all, just look at this adorable animation. [Ender Lilies animation] The highlight is the white priestess's graceful movement while the black knight attacks. No, wait, the black knight's attacks are pretty cool too. Guiding the player when introducing a system: this spot is actually a hidden passage. In front of it there is a glinting scrap that guides the player toward it, and inside the hidden passage there is a lever needed to progress. Since this is the very first hidden passage, I think it was placed to tell the player, "hidden passages exist in this game."
I stumbled on this healing song. [Back number - 水平線] When I first saw the MV, I thought, what a budget production. But it got interesting (interesting?) from the moment the protagonist closed her umbrella. At first I was drawn in by the visuals, but once I noticed the lyrics I felt a bit struck. Back number's melodies have always been great, and the lyrics blend with the melody beautifully; they bring out the best in each other. The lyrics plus the visuals give a feeling of pressure gradually being released. Still, I have to nitpick: why does taking off the shoes at the end take four separate motions? I would definitely grab shoes and socks off in one go. (But I just watched it again and it actually flows; nitpick retracted.) Watched it one more time: the long take at the end really has a feel to it. (Somehow the more I watch, the more satisfied I get 🐶.) Also, let me recommend the same group's 瞬き, in the same style. It has a sunflower vibe. I first learned of this song from that video where someone was singing while gaming and suddenly everyone joined in the chorus. That was quite a while ago. Even now, hearing this song reminds me of

At first I was drawn in by the PV. Interesting, I thought. The female lead is kind of cool; the PV made no sense at all, but it just hit, probably because of the color themes and the somewhat avant-garde content. After the first episode, the lead (whose name in the Chinese translation is apparently Natsume?) is indeed cool, and she dresses stylishly too. (Though her painter friend is even more aloof than she is.) And her phone: the screen is cracked and she still won't replace it. The detail is just perfect. I later read that the actress herself felt this detail better fit the character. Wow, instant fan (not really). And as I write this review, my own phone screen has been cracked forever and I still don't want to replace it, so it resonates even more (laughs). Looking at the lead's storyline alone, the plot is fairly formulaic, yet never dull. Besides, this show has more than one protagonist: the photographer Nara and her friends get an equal share of the screen time. It is a show with multiple women as leads, using them to reflect social phenomena and new culture, and the cultural elements it blends are an avant-garde hodgepodge. The male lead is a YouTuber;
Dear Wei: I hope this letter finds you well. I am writing it on the train to Duoli. By the time you receive it, I should have just arrived at the endless sunflower fields outside Duoli. Last year the three of us promised to go to Duoli together to find the life that belonged to us, the dreamlike scenes Yi had described to us: lying together in Duoli's sunflower fields, watching white clouds drift slowly across the sky, listening to birdsong, letting time flow quietly by; wandering Duoli's streets and alleys together, feeding the pigeons in the central square, praying in the Xikali Cathedral; going together to the Queen's birthday celebration, tasting every festival-limited delicacy, and watching fireworks bloom over the Xi'er River. Though the wish the three of us shared can no longer come true, I still want to live in Duoli for a while, at least to experience all of it in Yi's place. Nearly a year has passed since then. Back then we promised the three of us would be best friends forever; who could have known such an upheaval would occur and change our world completely. And not only ours: the changes in this world have overturned everyone's imagination. Where we once
Tic-Tac-Toe Online Server. Based on the Tic-Tac-Toe game from CS188, Berkeley, I developed an online version of Tic-Tac-Toe. Now your agent can play against my agent online! I think it is a good way to check whether our agents are optimal. My agent can beat random agents most of the time, even when it plays second. Online server website: Tic-Tac-Toe Online. Download the attached client file from the moodle form and place it in the same directory as your solveTicTacToe.py (solveTicTacToe.py depends on util.py, so util.py is also needed). Run this command: $ python3 TicTacToeOLClient.py -u demo -n 3 And enjoy! Notice: you need to specify a username with "-u USERNAME". Don't use "demo" as your username, because it is forbidden. Usage: TicTacToeOLClient.py [options]. Options: -u USERNAME: username, must not be empty nor
The Question: Fish eating fruit on jisuanke. Given an undirected acyclic graph G and all possible paths P in the graph, calculate: The first taste: In the contest, a handsome foreign teammate convinced me that this problem could be solved using LCA. I tried, and it did work, with the help of Dijkstra. My solution is to first run Dijkstra and get the distance between the root node and every other node, then calculate the LCA for every pair of nodes. The desired result is: It worked, but we got TLE from looping through all pairs of nodes, which is O(n^2). The second trial: After the contest, I was told this is a DP problem. You count the number of times each edge is accessed, multiply it by the edge weight, and sum the contributions up grouped by modulus 3 to get the result. This one, however, also got TLE. Oh, FISH! The final solution: The reason why the second solution still can
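Setting the mod-3 grouping aside, the core "count how often each edge is accessed" idea can be sketched for plain path-weight sums. Each edge splitting the tree into parts of sizes s and n − s lies on exactly s·(n − s) node-pair paths. This is my own minimal sketch of that counting trick, not the author's submitted solution:

```python
from collections import defaultdict

def sum_all_pair_path_weights(n, edges):
    # Nodes are numbered 1..n; edges is a list of (u, v, w) tuples.
    # An edge that splits the tree into parts of sizes s and n - s
    # lies on s * (n - s) paths, so it contributes w * s * (n - s).
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    # iterative DFS from node 1 to record a preorder and parent links
    parent = {1: (0, 0)}
    order, stack = [], [1]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, w in adj[u]:
            if v not in parent:
                parent[v] = (u, w)
                stack.append(v)

    # children precede parents in reversed preorder, so subtree
    # sizes are complete when each edge's contribution is added
    size = [1] * (n + 1)
    total = 0
    for u in reversed(order):
        p, w = parent[u]
        if p:
            size[p] += size[u]
            total += w * size[u] * (n - size[u])
    return total
```

This runs in O(n) after building the adjacency list, which is why the edge-contribution view beats the O(n^2) all-pairs LCA approach.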
Overview: R-CNN (Regions with CNN features). Efficient Graph-Based Image Segmentation: uses a disjoint set to speed up the merge operation. Selective Search: multiple criteria (color, texture, size, shape) to merge regions. HOG (Histogram of Oriented Gradients). AlexNet/VGG16. Notice that many descriptions are replicated directly from the original sources. Some Fundamental Concepts. Batch Size: Stochastic Gradient Descent has batch size = 1; Batch Gradient Descent has batch size = size of the training set; Mini-Batch Gradient Descent has 1 < batch size < size of the training set. Regularization: a regression model that uses the L1 regularization technique is called Lasso Regression, and a model that uses L2 is called Ridge Regression. Ridge Regularization: ridge regression adds the "squared magnitude" of the coefficients as a penalty term to the loss function. The first sum is an example of a loss function. Lasso Regularization: Lasso Regression (Least Absolute Shrinkage and Selection Operator) adds the "absolute value of magnitude" of the coefficients as a penalty term to the loss function.
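The difference between the two penalties above fits in a few lines. A small NumPy sketch (the function name and the MSE choice of base loss are my own illustration):

```python
import numpy as np

def regularized_mse(w, X, y, lam, kind="l2"):
    # mean squared error plus an L1 (lasso) or L2 (ridge)
    # penalty on the weight vector w
    residual = X @ w - y
    loss = np.mean(residual ** 2)
    if kind == "l2":
        penalty = lam * np.sum(w ** 2)        # ridge: squared magnitude
    else:
        penalty = lam * np.sum(np.abs(w))     # lasso: absolute magnitude
    return loss + penalty
```

The L1 penalty's constant gradient away from zero is what drives lasso coefficients exactly to zero (feature selection), while the L2 penalty only shrinks them toward zero.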
Useful Materials: Distinctive Image Features from Scale-Invariant Keypoints[1] by David G. Lowe. SIFT (Scale-Invariant Feature Transform)[2] on Towards Data Science. The SIFT (Scale Invariant Feature Transform) Detector and Descriptor[3]. Notes: SIFT uses DoG (Difference of Gaussians) to approximate the scale-normalized LoG (Laplacian of Gaussian)[4]: D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y), where G(x, y, σ) is the two-dimensional Gaussian function, I(x, y) is the input image, and * denotes convolution. [need more consideration] After each octave, the Gaussian image is down-sampled by a factor of 2: the Gaussian image whose blur has twice the initial value σ₀ is resampled by taking every second pixel in each row and column, and the new octave starts again with σ₀ relative to the down-sampled image. Since the image size is reduced to 1/4, the effective sigma for the next octave becomes 2σ₀ in the original image's coordinates, which is equal to σ₀ at the new resolution. To understand it, first consider this question: if the image size is reduced to 1/4, but the kernel size of
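One octave of the DoG pyramid described in these notes can be sketched with plain NumPy. This is a simplified illustration under my own assumptions (a zero-padded separable blur, k = √2, and no handling of octave down-sampling or keypoint detection):

```python
import numpy as np

def gaussian_kernel1d(sigma):
    # 1-D Gaussian kernel, truncated at ~3 sigma, normalized to sum to 1
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(image, sigma):
    # separable 2-D Gaussian blur: convolve rows, then columns
    # (zero padding at the borders; kernel must fit in the image)
    k = gaussian_kernel1d(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def dog_octave(image, sigma=1.6, k=2 ** 0.5, levels=5):
    # adjacent differences of progressively blurred images:
    # D = G(k * sigma) * I - G(sigma) * I, the DoG approximation to LoG
    gaussians = [gaussian_blur(image, sigma * k ** i) for i in range(levels)]
    return [b - a for a, b in zip(gaussians, gaussians[1:])]
```

Subtracting two blurs of the same image is cheap compared with evaluating the Laplacian directly, which is the practical appeal of the DoG approximation.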