Rewrite to static site

With a complete reorganisation of the directory structure and most of
the content converted to pandoc-flavoured markdown.

Some TODOs left before this can go live:
- Main page
- Atom feeds
- Bug tracker
This commit is contained in:
Yorhel 2019-03-23 11:52:08 +01:00
parent 5c85a7d32f
commit 6242b2ee9c
291 changed files with 4346 additions and 6141 deletions

@@ -1,75 +0,0 @@
=pod
I write a lot of miscellaneous little Perl/shell scripts and micro-libraries
for the purpose of getting something done. This page lists those I thought
might be useful to others as well.
I also maintain a collection of miscellaneous C micro-libraries. Those are
listed under the collective name of L<Ylib|https://dev.yorhel.nl/ylib>.
=head2 maildir.pl
October 2012. A tiny weechat plugin to display the number of unread emails in a
local Maildir. L<Latest
source|http://www.weechat.org/scripts/source/stable/maildir.pl.html/>
(L<1.0|http://p.blicky.net/wzbzs>).
=head2 ncdc-share-report
December 2011. Playing around with the Go programming language, I wrote another
transfer log parser and statistics generator for ncdc.
L<Example output|http://s.blicky.net/2012/ncdc-share-report.html>.
Download: L<0.3|http://p.blicky.net/h25z8>
(L<0.2|http://p.blicky.net/6yx2d>, L<0.1|http://p.blicky.net/ab4lm>).
=head2 ncdc-transfer-stats
September 2011. L<ncdc|https://dev.yorhel.nl/ncdc> gained transfer logging
features, and I wrote a quick Perl script to fetch some simple statistics from
it. L<source|https://p.blicky.net/4V9Kg59kUJUN> (L<0.2|http://p.blicky.net/eu00a>, L<0.1|http://p.blicky.net/agolr>).
=head2 json.mll
December 2010. I was writing a client for the L<public VNDB
API|http://vndb.org/d11> in OCaml and needed a JSON parser/generator. Since I
wasn't happy with the currently available solutions - they try to do too many
things and have too many dependencies - I decided to write a minimal JSON
library myself. L<source|http://g.blicky.net/serika.git/tree/json.mll>
=head2 vinfo.c
November 2009. The L<public VNDB API|http://vndb.org/d11> was designed to be
easy to use even from low level languages. I wrote this simple program to see
how much work it would be to use the API in C, and as example code for anyone
wishing to use the API for something more useful. Read the comments for more
info. L<source|https://dev.yorhel.nl/download/code/vinfo.c>
=head2 Microdc2 log file parser
June 2007. Simple perl script that parses log files created by
L<microdc2|http://corsair626.no-ip.org/microdc/> and outputs a simple and
ugly html file with all uploaded files. It correctly merges chunked
uploads, calculates average upload speed per file and total bandwidth used
for uploads. L<source|https://dev.yorhel.nl/download/code/mdc2-parse.pl>
B<Note:> for those of you who still use microdc2, please have a look at
L<ncdc|https://dev.yorhel.nl/ncdc>, a modern alternative.
=head2 yapong.c
February 2006. Yet Another Pong, and yet another program written just for
testing/learning purposes. Tested to work with the ncurses or pdcurses
libraries. L<source|https://dev.yorhel.nl/download/code/yapong.c> (L<older
version|https://dev.yorhel.nl/download/code/yapong-0.01.c>).
=head2 echoserv.c
February 2006. A simple non-blocking single-threaded TCP echo server,
displaying how the select() system call can be used to handle multiple
connections. L<source|https://dev.yorhel.nl/download/code/echoserv.c>
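As a sketch of the idea (my own minimal illustration, not the code from echoserv.c): the heart of such a server is a loop that hands select() a set of file descriptors and then reads from whichever one is ready. Reduced to a single descriptor pair, the pattern looks like this:

```c
#include <assert.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

/* Wait with select() until `in` is readable, then copy one chunk of data
 * to `out`. This is one iteration of a select()-based echo loop; a real
 * server would FD_SET() every client socket plus the listening socket,
 * and loop over FD_ISSET() after select() returns. Returns the number of
 * bytes echoed, 0 on EOF, or -1 on error. */
static ssize_t echo_once(int in, int out) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(in, &rfds);
    if (select(in + 1, &rfds, NULL, NULL, NULL) < 0)
        return -1;
    char buf[4096];
    ssize_t n = read(in, buf, sizeof buf);
    if (n <= 0)
        return n;
    return write(out, buf, n);
}
```

A pair of pipes can stand in for a network socket when trying this out; the single-threaded, non-blocking structure comes from calling this in a loop with every connection's fd added to the set.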
=head2 bbcode.c
January 2006. Simple BBCode to HTML converter written in plain C, for learning
purposes. L<source|https://dev.yorhel.nl/download/code/bbcode.c>

dat/dump/awshrink.md (new file)

@@ -0,0 +1,99 @@
% AWStats Data File Shrinker
People who run AWStats on large log files have most likely noticed: the data
files can grow quite large, resulting in both a waste of disk space and longer
page generation times for the AWStats pages. I wrote a small script that
analyzes these data files and can remove any information you think is
unnecessary.
**Download:** [awshrink](/download/code/awshrink) (copy to /usr/bin to
install).
## Important
Do **NOT** use this script on data files that are not yet complete (i.e. the
data file for the month you're currently in). Doing so will result in
inaccurate sorting of visits, pages, referers and whatever other lists you're
shrinking. Also keep in mind that this is just a quickly written Perl hack: it
is by no means fast and may hog some memory while shrinking data files.
## Usage
awshrink [-c -s] [-SECTION LINES] [..] datafile
-s Show statistics
-c Overwrite datafile instead of writing to a backupfile (datafile~)
-SECTION LINES
Shrink the selected SECTION to LINES lines. (See example below)
## Typical command-line usage
While awshrink is most useful in monthly cron jobs, here's an example of basic
command-line usage to demonstrate what the script can do:
$ wc -c awstats122007.a.txt
29916817 awstats122007.a.txt
$ awshrink -s awstats122007.a.txt
Section Size (Bytes) Lines
SCREENSIZE* 74 0
WORMS 131 0
EMAILRECEIVER 135 0
EMAILSENDER 143 0
CLUSTER* 144 0
LOGIN 155 0
ORIGIN* 178 6
ERRORS* 229 10
SESSION* 236 7
FILETYPES* 340 12
MISC* 341 10
GENERAL* 362 8
OS* 414 29
SEREFERRALS 587 34
TIME* 1270 24
DAY* 1293 31
ROBOT 1644 40
BROWSER 1992 127
DOMAIN 2377 131
UNKNOWNREFERERBROWSER 5439 105
UNKNOWNREFERER 20585 317
SIDER_404 74717 2199
PAGEREFS 130982 2500
KEYWORDS 288189 27036
SIDER 1058723 25470
SEARCHWORDS 5038611 157807
VISITOR 23285662 416084
* = not shrinkable
$ awshrink -s -c -VISITOR 100 -SEARCHWORDS 100 -SIDER 100 awstats122007.a.txt
Section Size (Bytes) Lines
SCREENSIZE* 74 0
WORMS 131 0
EMAILRECEIVER 135 0
EMAILSENDER 143 0
CLUSTER* 144 0
LOGIN 155 0
ORIGIN* 178 6
ERRORS* 229 10
SESSION* 236 7
FILETYPES* 340 12
MISC* 341 10
GENERAL* 362 8
OS* 414 29
SEREFERRALS 587 34
TIME* 1270 24
DAY* 1293 31
ROBOT 1644 40
BROWSER 1992 127
SEARCHWORDS 2289 100
DOMAIN 2377 131
SIDER 3984 100
UNKNOWNREFERERBROWSER 5439 105
VISITOR 5980 100
UNKNOWNREFERER 20585 317
SIDER_404 74717 2199
PAGEREFS 130982 2500
KEYWORDS 288189 27036
* = not shrinkable
$ wc -c awstats122007.a.txt
546074 awstats122007.a.txt

dat/dump/btrfssize.md (new file)

@@ -0,0 +1,37 @@
% btrfs-size.pl
_2016-08-16_ - btrfs-size.pl is a quick little script to provide an overview of
the disk space used by btrfs subvolumes. It's comparable to
[btrfs-size.sh](https://poisonpacket.wordpress.com/2015/05/26/btrfs-snapshot-size-disk-usage/),
but is somewhat faster and has a few options to sort the output.
Honestly, it's still hard to draw any conclusions from the sizes provided by
btrfs, but sadly, [ncdu](/ncdu) is useless for snapshot-heavy filesystems.
Only tested with btrfs-progs v4.4.1.
**Download:** [btrfs-size.pl](https://p.blicky.net/FNPXpbwMXfTI.txt)
([syntax-highlighted version](https://p.blicky.net/FNPXpbwMXfTI)).
## Usage
btrfs-size.pl --help [-nser] <path>
-n Order by path name
-s Order by (total) subvolume size
-e Order by exclusive subvolume size
-r Reverse order
## Example output
# btrfs-size.pl /data
gfbf007/cur 46.32 GiB 16.00 KiB
gfbf007/snap/2016-08-14.08 46.32 GiB 428.00 KiB
gfbf007/snap/2016-08-15.03 46.32 GiB 428.00 KiB
gfbf007/snap/2016-08-16.03 46.32 GiB 16.00 KiB
ggit011/cur 23.92 MiB 16.00 KiB
ggit011/snap/2016-08-14.08 23.90 MiB 300.00 KiB
ggit011/snap/2016-08-15.08 23.92 MiB 16.00 KiB
gman015/cur 3.74 GiB 16.00 KiB
gman015/snap/2016-08-14.08 3.74 GiB 112.00 KiB
gman015/snap/2016-08-15.02 3.74 GiB 96.00 KiB
gman015/snap/2016-08-16.02 3.74 GiB 16.00 KiB

dat/dump/demo.md (new file)

@@ -0,0 +1,31 @@
% Demos
Yes, I realise that the title is plural, suggesting there's more than one demo.
That is not quite true, unfortunately. I chose the plural form simply in the
hope that I will, in fact, write more demos, and that this page will actually
get more content in the future. I still happen to be a huge fan of the
[demoscene](http://demoscene.info/), and still wish to contribute to it... if
only I could find the time and self-discipline to do so. In the meantime,
here's one demo I did write some time ago.
*(2019 update: Don't get your hopes up, I likely won't ever write another demo.
I don't have the patience for it, I guess.)*
## Blue Cubes
![Blue Cubes.](/img/bluecubes.png){.right}
August 2006. My first demo - or more exactly: an intro. Blue Cubes is a 64kB
intro written in OpenGL/SDL with Linux as the target OS. I wrote it within 10
days, without any prior experience in any field of computer-generated graphics
or music. So, needless to say, it sucks. I'm ashamed even at the thought of
having released it at a respectable demoparty like
[Evoke](https://www.evoke.eu/2006/). Still, it didn't feel like I was
unwelcome: I actually received three prizes - 3rd prize in the 64k competition
(there were only 3 actual entries, but oh well), best non-Windows 64k intro (it
was the only one in the competition), and the Digitale Kultur newcomer award,
which actually is something to be proud of, I guess.
[download](/download/yorhel~bluecubes.zip) -
[mirror](https://scene.org/file.php?file=/parties/2006/evoke06/in64/yorhel_bluecubes.zip&fileinfo)
(includes linux binaries, windows port, and sources) -
[pouet comments](https://pouet.net/prod.php?which=25866).

dat/dump/grenamr.md (new file)

@@ -0,0 +1,44 @@
% GTK+ Mass File Renamer
GRenamR is a GTK+ mass file renamer written in Perl; its functionality is
inspired by the
[rename](https://search.cpan.org/~rmbarker/File-Rename-0.05/rename.PL) command
that comes with the File::Rename Perl module.
GRenamR allows renaming multiple files using Perl expressions. You can see the
effect of your expression while typing it, and can preview your actions before
applying them. The accepted expressions are mostly the same as those of the
rename command (see the paragraph above): your expression is evaluated with
`$_` set to the filename, and any modification to this variable results in the
file being renamed. There's one extra variable that the rename command does
not have: `$i`, which reflects the file number (starting from 0) in the current
list. This allows expressions such as `$_=sprintf'%03d.txt',$i`.
**Download:** [grenamr](/download/code/grenamr-0.1.pl)
(copy to /usr/bin/ to install)
Requires the Gtk2 Perl module. Most distributions have a perl-gtk2 package.
## Example expressions
y/A-Z/a-z/ # Convert filenames to lowercase
$_=lc # Same
s/\.txt$/.utf8/ # Change all '.txt' extensions to '.utf8'
s/([0-9]+)/sprintf'%04d',$1/eg # Zero-pad all numbers in filenames
# Replace each image filename with a zero-padded number starting from 1
s/^.+\.jpg$/sprintf'%03d.jpg',$i+1/e
## Caveats / bugs / TODO
- Calling functions such as 'sleep' or 'exit' in the expression will trash the program
- It's currently not possible to manually order the file list, so $i is
not useful in every situation
- It's currently not possible to manually rename files or exclude items
  from being affected by the expression
- The expression isn't executed in the opened directory, so things like
[-X](https://perldoc.perl.org/functions/-X.html) won't work
## Screenshot
![GRenamR screenshot](/img/grenamr.png){.scr}

dat/dump/insbench.md (new file)

@@ -0,0 +1,81 @@
% Insertion Performance Benchmarks
_2013-07-05_ - One of my favourite data structures in C is the ordered vector
(or array, whatever you call them). Incredibly simple to implement, very low
memory overhead, and can provide O(log n) lookup with a simple binary search.
However, ordered vectors have one very weak point: insertion and deletion of
items is O(n). For small n that doesn't really matter, but if the number of
items in the list can grow a bit, you may run into performance issues. If
you're not careful, this could even turn your ordered vector into an attack
vector (apologies for the terrible pun).
My goal with this benchmark is to get a feel for how, exactly, insertion
performance behaves with an ordered vector. What values of n are "small"? And
how much worse does insertion performance get compared to more complex data
structures?
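The trade-off described above is easy to make concrete. Here's a minimal sketch (my own illustration, not the vec.h implementation used in the benchmark) of inserting into a sorted int array: lookup is a cheap binary search, but the insert has to shift everything after the insertion point.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Binary search: the index at which `key` should be inserted to keep `a`
 * sorted. O(log n) comparisons. */
static size_t lower_bound(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

/* Insert `key` into the sorted array `a` of length `n` (capacity must
 * exceed n). The memmove() is the O(n) part: on average half the elements
 * shift one slot to the right. Returns the new length. */
static size_t sorted_insert(int *a, size_t n, int key) {
    size_t pos = lower_bound(a, n, key);
    memmove(a + pos + 1, a + pos, (n - pos) * sizeof *a);
    a[pos] = key;
    return n + 1;
}
```

Appending an already-largest key makes the memmove() a no-op, which is why the best case below (in-order insertion at the end) behaves so differently from the worst case (insertion at the front).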
For comparison, I chose the B-tree and hash table implementations from
[klib](https://github.com/attractivechaos/klib) (from commit fff70758, to be
precise). My goal wasn't to benchmark the performance of different
implementations, so I simply chose two implementations that I suspect are among
the fastest. The vector implementation in the benchmarks is my own creation:
[vec.h](https://g.blicky.net/globster.git/tree/src/util/vec.h?id=2c11d2a) from
the [Globster](/globster) code base.
**Source code:** [ins-bench.c](https://p.blicky.net/r746e)
## Best case & worst case
For a start, I decided to benchmark the best and worst case performance of
inserting elements into a vector. The best case happens when inserting all
items at the end of the vector, the worst case when inserting them in front.
The B-tree and hash table benchmarks provided for comparison have all items
inserted in order.
I'm cheating here with the vector implementation, because all elements are
inserted into the list without first finding their position with a binary
search. Actual performance will thus be a bit worse, depending on whether
the final application needs that binary search or whether it can assume its
input to be already sorted.
[ ![Benchmark results](/img/insbench-bench-thumb.png) ](/img/insbench-bench.png)
Gnuplot script: (The awk(ward) part can likely be done natively in gnuplot as
well, but I was too lazy to figure out how)
set terminal png size 1000, 1500
set output "bench.png"
set logscale xy
set xlabel "number of items"
set ylabel "average time per insert (ms)"
set grid mxtics xtics mytics ytics
plot "< awk '{print $1, $2/$1*1000}' bench-vec" title 'vector, worst case',\
"< awk '{print $1, $2/$1*1000}' bench-best" title 'vector, best case',\
"< awk '{print $1, $2/$1*1000}' bench-hash" title 'khash',\
"< awk '{print $1, $2/$1*1000}' bench-btree" title 'kbtree'
## Average case
For the second benchmark I inserted values created with `rand()`, which should
be a more accurate simulation of some real-world applications. This time I'm
not cheating with the vector implementation, a binary search is performed in
order to insert the items in the correct location.
[ ![Benchmark results](/img/insbench-rand-thumb.png) ](/img/insbench-rand.png)
set terminal png size 1000, 1500
set output "bench-rand.png"
set logscale xy
set xlabel "number of items"
set ylabel "average time per insert (ms)"
set grid mxtics xtics mytics ytics
plot "< awk '{print $1, $2/$1*1000}' rand-vec" title 'vector',\
"< awk '{print $1, $2/$1*1000}' rand-hash" title 'khash',\
"< awk '{print $1, $2/$1*1000}' rand-btree" title 'kbtree'
## Benchmarking setup
All benchmarks were performed on a 3 GHz Core 2 Duo E8400 with a 6 MiB cache.
Compiled with the Gentoo-provided gcc 4.6.3 at -O3, linked against glibc 2.15,
and run on a Linux 3.8.13-gentoo kernel. Boring details, but somehow good to
document.

dat/dump/nccolour.md (new file)

@@ -0,0 +1,86 @@
% Colours in NCurses
I decided to do some experimentation with how the colours defined in ncurses
are actually displayed in terminals, what the effects are of combining these
colours with other attributes, and how colour schemes of a terminal can affect
the displayed colours. To this end I wrote a small C file and ran it in
different terminals and different configurations. Note that only the 8 basic
NCurses colours are tested, the more flexible init\_color() function is not
used.
**Source code:** [nccolour.c](/download/code/nccolour.c)
([syntax-highlighted version](http://p.blicky.net/xu35c))
## Notes / observations
- The most obvious conclusion: the displayed colours do not have the exact same
colour value in every terminal. Some terminals also allow users to modify
these colours.
- You cannot assume that the default foreground or background colour can be
represented by one of the 8 basic colours defined by NCurses.
- Specifying -1 as colour, to indicate the default foreground or background
colour, seems to work fine in any terminal tested so far.
- All tested terminals render the foreground colour in a lighter shade when the
  A\_BOLD attribute is set. This does not apply to the background colour. As a
  result, text remains visible with A\_BOLD even when the foreground and
  background colour are set to the same value.
- Unfortunately, not all terminals are configured in such a way that all
possible colours are readable. So as a developer you'll still have to support
configurable colour schemes in your ncurses application. :-(
- On most terminals, setting the foreground and background colour to the same
value without applying the A\_BOLD attribute will make the text invisible.
Don't rely on this, however, as this is not the case on OS X.
## Full screenshot
To avoid wasting space, the comparison screenshots below only
display the colour table. Here's a screenshot of the full output of the
program, which also explains what each column means.
![Full screenshot](/img/nccol-full.png)
## Screenshots
Arch Linux, Roxterm, Default color scheme
![](/img/nccol-rox-b.png)
Arch Linux, Roxterm, GTK color scheme
![](/img/nccol-rox-w.png)
Arch Linux, Roxterm, Tango color scheme
![](/img/nccol-rox-t.png)
Arch Linux, Roxterm, Modified Tango color scheme
![](/img/nccol-rox-c.png)
Arch Linux, xterm (default settings)
![](/img/nccol-xterm.png)
Ubuntu 11.10, Gnome-terminal
![](/img/nccol-ubuntu.png)
Debian Squeeze, VT (default settings)
![](/img/nccol-debian.png)
FreeBSD, VT (default settings)
![](/img/nccol-fbsd.png)
Mac OS X, Terminal
![](/img/nccol-osx-terminal.png)
Mac OS X, iTerm2
![](/img/nccol-osx-iterm2.png)
CentOS 6.4
![](/img/nccol-centos64.png)