Tech
14

2022

1

Dec 10, Sat  I used to keep these in my diary, but it's better to collect the tech notes into separate posts after all. Stable Diffusion  Getting Stable Diffusion to output stable, high-quality images apparently requires the right incantations. I used this article as a reference. At first I used only the prompt, but faces and bodies tended to come out distorted. Then I remembered that negative prompts exist; adding one reduced the distortion considerably.  The best prompt so far. --prompt "${normal_prompt_content} \ with fantastic lighting fantastic composition \ in high production quality" \ --negative-prompt "twisted face twisted body closeup view"  Some of the outputs. A girl in a school uniform holding a gun --prompt "a cute anime girl in school skirt uniform holding a gun \ with fantastic lighting fantastic composition \ in PlayStation5 octane render style" \ --negative-prompt "twisted face twisted body closeup view" A white

2020

1

Nov 04 WebVR requires HTTPS on Oculus Quest. WebVR seems to require enabling Universal RP (which will cause some issues). Universal RP supports Shader Graph, which is good. But it does not support point light shadows, which is bad. By default, Universal RP only enables shadows for the main light. Go to UniversalRenderPipelineAsset > Lighting > Additional Lights and enable Cast Shadows. You may also want to enable Shadows > Soft Shadows. The post-processing volume for Universal RP is Create > Volume > Global Volume. Some suggest creating an empty GameObject and adding a PostProcessingVolume to it; that did not work for me. (Maybe it is because the Volume Mask is set to Default.) Nov 09 Unity does not have mirror support for the built-in pipeline or URP. HDRP has planar reflection. To make planar reflection work in URP, you may use tricks like a reflection probe or simple reflection-camera logic. But these only work for small planar reflections; for larger mirrors, they break down

2019

4

Tic-Tac-Toe Online Server Based on the Tic-Tac-Toe game of CS188, Berkeley, I developed an online version of Tic-Tac-Toe. Now your agent can play against my agent online! I think it is a good way to check whether our agents are optimal. My agent can beat random agents most of the time, even when it is the second player. Online Server Website: Tic-Tac-Toe Online Download the attached client file from the Moodle forum and place it in the same directory as your solveTicTacToe.py (solveTicTacToe.py depends on util.py, so util.py is also needed). Run this command: $ python3 TicTacToeOLClient.py -u demo -n 3 And enjoy! Notice: You need to specify a username with "-u USERNAME". Don't use "demo" as your username because it is forbidden. Usage: TicTacToeOLClient.py [options] Options Variable Description -u USERNAME Username, must not be empty nor
AutoTag AutoTag is a program that generates tags for documents automatically. The main process includes: Tokenization (N-gram + dictionary lookup). Generate a bag of words for each document. Calculate term frequency and inverse document frequency. Pick the top x words with the greatest tf-idf values as tags. N-gram An N-gram generates a sequence of n words at every position of a sentence.[1] sentences = 'Lucy like to listen to music. Luna like music too.' items = ngram(sentences, 2) print(items) # output: [ 'Lucy like', 'like to', 'to listen', 'listen to', 'to music', 'Luna like', 'like music', 'music too', ] Bag of words The bag-of-words model is a simplifying representation used in NLP and IR.[1] N-gram Count the times that each word appears
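A minimal sketch of the tf-idf tag-picking step described above (the function and variable names are my own, not taken from AutoTag's actual implementation):

```python
import math
from collections import Counter

def tfidf_tags(documents, top_x=3):
    """Pick the top_x words with the greatest tf-idf values as tags
    for each document. `documents` is a list of token lists."""
    n_docs = len(documents)
    # Document frequency: in how many documents does each word appear?
    df = Counter(word for doc in documents for word in set(doc))
    tags = []
    for doc in documents:
        tf = Counter(doc)  # term frequency within this document
        scores = {
            word: (count / len(doc)) * math.log(n_docs / df[word])
            for word, count in tf.items()
        }
        tags.append(sorted(scores, key=scores.get, reverse=True)[:top_x])
    return tags

docs = [
    ["lucy", "like", "music", "music"],
    ["luna", "like", "film"],
]
print(tfidf_tags(docs, top_x=1))
```

Words that appear in every document (like "like" here) get an idf of zero, which is exactly why tf-idf filters out common filler words as tag candidates.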

w3m

104
w3m: WWW wo Miru (c) Copyright Akinori ITO w3m is a pager with WWW capability. It IS a pager, but it can be used as a text-mode WWW browser.

Keyboard Shortcuts

| Shortcut | Action | Level |
| --- | --- | --- |
| H | Help | Browser |
| q | Quit w3m | Browser |
| C-h | History | Browser |
| T | New tab | Tabs |
| C-j | Open link in current tab | Tabs |
| C-t | Open link in new tab | Tabs |
| C-q | Close tab | Tabs |
| U | Go to URL | Page |
| R | Reload | Page |
| B | Back | Page |

Configuration

| File | Location |
| --- | --- |
| Keymap file | ~/.w3m/keymap |
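As I understand it, the keymap file consists of `keymap <key> <function>` lines; a sketch that binds some of the shortcuts above (the function names are my assumption from w3m's documentation; verify against your w3m version):

```
# ~/.w3m/keymap
keymap  H  HELP
keymap  B  BACK
keymap  R  RELOAD
keymap  U  GOTO
```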
Permission Control for NTFS We often encounter the claim that mounting NTFS under Linux means no permission control. But that is not true. According to JanC's answer on AskUbuntu: Contrary to what most people believe, NTFS is a POSIX-compatible filesystem, and it is possible to use permissions on NTFS. The First Trial First, let's open /etc/fstab and see how the partitions are mounted. $ sudo nano /etc/fstab In my situation, the NTFS partitions are mounted as follows: /dev/sda1 /mnt/NTFS1 auto nosuid,nodev,nofail,x-gvfs-show 0 0 /dev/sda2 /mnt/NTFS2 auto nosuid,nodev,nofail,x-gvfs-show 0 0 The nosuid option disables set-user-ID bits on the filesystem, so the first step is to remove it. However, once this option is removed, a uid and a gid must be given to set up the permission control. By default, the current uid and gid will be used. We can also specify
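Following the steps above, a sketch of a modified fstab entry with nosuid removed and an explicit uid/gid (1000 is an assumed user/group id; adjust both to your system):

```
/dev/sda1 /mnt/NTFS1 ntfs-3g uid=1000,gid=1000,nodev,nofail,x-gvfs-show 0 0
```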

2018

7

List # list the filter table sudo iptables -L # list the nat table sudo iptables -L -t nat Redirect # Redirect locally sudo iptables -A OUTPUT -t nat -p tcp --src 127.0.0.1 --dport 80 -j REDIRECT --to-port 8080 # Redirect in LAN sudo iptables -A PREROUTING -t nat -p tcp --src 10.42.0.0/24 --dst 10.42.0.1 --dport 80 -j REDIRECT --to-port 8080 Filter # Reject with icmp-port-unreachable. sudo iptables -A OUTPUT --dst www.bing.com -j REJECT # Drop and hang up the connection. sudo iptables -A OUTPUT --dst www.bing.com -j DROP Packet flow paths Packet flow paths (from iptables - Wikipedia):
Substitution, dirname, basename and suffix Substitution can be used to get the path and the short filename. filename=a/b/c/name.file echo ${filename#*/} # b/c/name.file echo ${filename##*/} # name.file echo ${filename%/*} # a/b/c echo ${filename%%/*} # a But in fact, there is a better way: filename=a/b/c/name.file echo $(dirname $filename) echo $(basename $filename) Still, substitution is useful for getting the filename without its suffix, or for getting the suffix. filename=file.name.type echo ${filename%.*} # file.name echo ${filename##*.} # type Parameter Expansion String selection: string=1234567890abcdefg echo ${string: 10} # abcdefg echo ${string: 10:5} # abcde echo ${string: 7:-4} # 890abc Array selection: arr=(0 1 2) arr=($
Input and Output $ ffmpeg -i input.mp4 output.mp4 Cutting Video or Audio Beginning time and length: $ ffmpeg -i input.mp4 -ss 00:00:10 -t 01:00:10 output.mp4 Beginning time and ending time: $ ffmpeg -i input.mp4 -ss 01:00 -to 10:00 -c copy output.mp4 "-c" is short for "-codec". Use "-codec:a" or "-codec:v" to specify audio or video respectively. "-c copy" means the data will be copied directly to the output (rather than being re-encoded in some cases). Video Converting $ ffmpeg -i input1.flv -i input2.flv -framerate 30 -f mp4 -vf "scale=1280:720" output.mp4 $ ffmpeg -i input1.flv -i input2.flv -framerate 30 -f mp4 -vf "crop=${w}:${h}:${x}:${y}" output.mp4 "-framerate 30" fixes the framerate to 30 fps. "-f mp4" before
0. Install GPU drivers sudo add-apt-repository ppa:graphics-drivers/ppa sudo apt update sudo apt install nvidia-390 1. Install the CUDA toolkit and cuDNN SDK (and CUPTI) Following the instructions at Installing TensorFlow on Ubuntu. # Adds NVIDIA package repository. sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_9.1.85-1_amd64.deb wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb sudo dpkg -i cuda-repo-ubuntu1604_9.1.85-1_amd64.deb sudo dpkg -i nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb sudo apt-get update # Includes optional NCCL 2.x. sudo apt-get install cuda9.0 cuda-cublas-9-0 cuda-cufft-9-0 cuda-curand-9-0 \ cuda-cusolver-9-0 cuda-cusparse-9-0 libcudnn7=7.1.4.18-1
Transformation Define a struct ObjectProperty as follows:

| property | type | comment |
| --- | --- | --- |
| rotation | vec3 | The object's rotation around its own center. |
| scale | vec4 / vec3 | The object's scale level around its own center. |
| translation | vec3 | The object's position relative to the origin (its absolute position). |
| face | vec3 | Where the object looks at. |

The code: struct ObjectProperty { vec3 rotation; vec4 scale; vec3 translation; vec3 face; }; Object Transformation Object transformation order: Rotate -> Scale -> Translate. Rotate and scale around the center of the object, so that the order of these two transformations does not influence the result. The transformation matrix is: struct ObjectProperty objProp; mat4 m4ModelMatrix = translate(objProp.translation) * scale(objProp.scale) * rotate(objProp.rotation); The transformation operation is: vec4 v4PosInModel = m4ModelMatrix * vertex.position; Where vertex.position is a vec4 object containing the position of the
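The model-matrix construction above can be sketched in numpy (the matrix helpers are written out by hand here; the post itself uses GLSL-style translate/scale/rotate functions that are not shown). Only a rotation about the z axis is implemented, as a stand-in for the full Euler rotation:

```python
import numpy as np

def translate(t):
    """4x4 translation matrix."""
    m = np.eye(4)
    m[:3, 3] = t
    return m

def scale(s):
    """4x4 scale matrix."""
    return np.diag([s[0], s[1], s[2], 1.0])

def rotate_z(angle):
    """4x4 rotation about the z axis (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    m = np.eye(4)
    m[0, 0], m[0, 1] = c, -s
    m[1, 0], m[1, 1] = s, c
    return m

# Model matrix: rotate and scale around the object's center, then translate,
# matching the post's translate * scale * rotate order.
model = translate([1, 2, 3]) @ scale([2, 2, 2]) @ rotate_z(np.pi / 2)

# Transform a vertex position (homogeneous coordinates, w = 1).
pos = model @ np.array([1.0, 0.0, 0.0, 1.0])
print(pos[:3])  # rotated to (0, 1, 0), scaled to (0, 2, 0), moved to (1, 4, 3)
```

Because matrices apply right-to-left, the rightmost factor (rotate) hits the vertex first, which is what "Rotate -> Scale -> Translate" means.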
All of the following problems came up while developing my C++ library donnylib. (Running into this many weird problems in one day takes some luck.) donnylib: https://github.com/Donny-Hikari/donnylib Weird Template Subclass The following program fails template argument deduction. (Compile environment: g++ 5.4.0, -std=c++17) template<class T> struct FOO { struct NOT { NOT() { } }; FOO() { sizeof(T); } }; template<class T> typename FOO<T>::NOT foo(typename FOO<T>::NOT a) { return a; } void test1() { FOO<int>::NOT f; foo(f); // template argument deduction/substitution failed. } (This is actually standard behavior: a nested type like FOO<T>::NOT is a non-deduced context, so the compiler cannot infer T from it.) Adding the following code does not help either: template<class T> using NOT = typename FOO<T>::NOT; template<class T> NOT<T> foo(NOT<T
Android Library To build an Android library project, simply change the following line in build.gradle (Module): apply plugin: 'com.android.application' to: apply plugin: 'com.android.library' To pack the library into a jar file, add these lines to build.gradle (Module): def JarLibName="YourLibName" task clearJar(type: Delete) { delete 'build/generated/' + JarLibName + '.jar' } task makeJar(type: Jar) { from zipTree(file('build/intermediates/bundles/release/classes.jar')) baseName = JarLibName destinationDir = file("build/generated") } makeJar.dependsOn(clearJar, build) And run the following command in the terminal: ./gradlew makeJar If the lint process fails, add these to build.gradle (Module): android { // ... lintOptions { abortOnError false } // ... } Remember to copy the jar library file

2017

1

Recently (last month) I wanted to look into face recognition and face detection. Searching online, I found plenty of material on recognizing faces with ready-made modules. But what I wanted was face recognition with a neural network, and more hands-on experience with neural networks. So I searched for TensorFlow-based face recognition and was lucky to find Hironsan's BossSensor project. Studying his code, I learned the steps of building a neural network with the Keras framework. On that basis I made a series of improvements and achieved higher accuracy. Main Procedure Preprocess -> dataset split -> face image input -> convolutional neural network -> classification output -> decision Preprocess First, collect face image data. I used OpenCV to extract faces from photos. Photo sources: taken on the spot, people around me, phone photos, collected from the web. (Only after I finished did I remember that something as convenient as ImageNet exists.) Then read the image data and preprocess the face images: scale the images and add padding so that the width and height are equal
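The pad-to-square preprocessing step described above can be sketched like this in plain numpy (the function name and centering choice are my own; the project's actual code is not shown here):

```python
import numpy as np

def pad_to_square(img, fill=0):
    """Pad an H x W (x C) image with `fill` so that height == width,
    keeping the original image centered."""
    h, w = img.shape[:2]
    size = max(h, w)
    top = (size - h) // 2
    left = (size - w) // 2
    pad = [(top, size - h - top), (left, size - w - left)]
    pad += [(0, 0)] * (img.ndim - 2)  # do not pad the channel axis
    return np.pad(img, pad, mode="constant", constant_values=fill)

img = np.ones((4, 2), dtype=np.uint8)  # a 4x2 grayscale image
square = pad_to_square(img)
print(square.shape)  # (4, 4): one column of zeros added on each side
```

After this step the square image can be resized to the network's fixed input size without distorting the face's aspect ratio.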