<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Pete's Home Page on the World Wide Web!!</title>
<link>https://peteftw.com/~pete/</link>
<description>Thoughts that need a more permanent home.</description>
<lastBuildDate>Mon, 20 Apr 2026 23:41:21 +0000</lastBuildDate>
<atom:link href="https://peteftw.com/~pete/rss.xml" rel="self" type="application/rss+xml"/>
<item>
<title>The Open Internet Needs Users</title>
<link>https://peteftw.com/~pete/2026/04/the-open-internet.html</link>
<guid isPermaLink="true">https://peteftw.com/~pete/2026/04/the-open-internet.html</guid>
<pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
<description><![CDATA[
<h1>The Open Internet Needs Users</h1>
<hr>
<p>
When I was a teenager in the 1990s the internet felt like a secret that not enough people
knew about yet. I was a curious kid, interested in how things worked, and I was completely
enamored of it. I found <a href="https://en.wikipedia.org/wiki/Linux">Linux</a> -- specifically
<a href="https://en.wikipedia.org/wiki/Slackware">Slackware</a> -- and suddenly I had a
working installation on used hardware from Goodwill. Eventually I got a mail server running for my friends so we could have our own email addresses
on our own domain. Before that we had shared a single email address on our parents' ISP account. Having
something that was ours felt genuinely powerful. We owned a small piece of the internet.
Nobody could take it away from us or change the terms on us.
</p>
<p>
That experience is a big part of why I still care about the open internet. I still run my own servers. I still want to build on open protocols. But I also recognize that I am not a typical user. Not everyone has the time, the curiosity, or the inclination to go down that path. Most people just want things to work. That is completely reasonable, and it is exactly the dynamic that closed platforms exploit.
</p>
<p>
So the question I keep coming back to is: how does the average person work towards maintaining an open internet? I do not have a complete answer. But I think it starts with understanding what is at stake.
</p>
<p>
The open internet has always survived on a simple condition: it has to be useful enough that ordinary people choose to use it. When it stops being the most convenient option, people leave. They've done it before, and they're doing it again now.
</p>
<p>
In the early 1990s, most Americans online weren't on the internet at all -- they were on <a href="https://en.wikipedia.org/wiki/AOL">AOL</a>. A closed, curated system with its own content, its own email, its own chat. It was easy and it worked. What killed it wasn't a better closed system -- it was the open web becoming genuinely useful. Companies started publishing their own websites. News, shopping, and communication moved onto the open internet and AOL became a middleman nobody needed.
</p>
<p>
For a while, the open web thrived. People published blogs. They followed each other through <a href="https://en.wikipedia.org/wiki/RSS">RSS</a> feeds -- a simple open standard that let you subscribe to any site and read everything in one place, without going to each site individually and without any platform in the middle deciding what you saw. <a href="https://en.wikipedia.org/wiki/Aaron_Swartz">Aaron Swartz</a>, who co-authored the RSS 1.0 spec at age 14, believed deeply that open standards like this were the foundation of a free internet. He spent much of his short life fighting for that idea. The culture he represented -- of building open things that anyone could use and nobody could own -- produced the best version of the web we have ever had. I miss it. I wish we were still there.
</p>
<p>
Then the smartphone arrived. The web didn't disappear, but the habits changed. Apps offered tighter experiences: faster, cleaner, with push notifications and native feel. The data behind them -- often just JSON talking to an API -- was the same information that a website could have delivered. But the app became the interface, and the interface became the product. <a href="https://en.wikipedia.org/wiki/Twitter">Twitter</a> progressively throttled its web experience to push people into the app. RSS readers were shut down -- Google killed <a href="https://en.wikipedia.org/wiki/Google_Reader">Google Reader</a> in 2013 and took a huge portion of the blog-reading public with it. The open URL became a second-class citizen.
</p>
<p>
This matters because apps are not the open internet. You cannot link deeply into most of them. You cannot index them. You cannot archive them. When a company shuts down or changes its terms, the content disappears. There is no <a href="https://web.archive.org">Wayback Machine</a> for an app's timeline. The information exists, but it is not yours to access.
</p>
<p>
For people who are not going to run their own servers, there are still meaningful choices. Some require no technical skill at all:
</p>
<ul>
<li>Publish things on URLs you control, not just on platforms.</li>
<li>Link to open web pages instead of deep-linking to apps.</li>
<li>Subscribe to RSS feeds and use an RSS reader instead of a social feed.</li>
<li>Choose services that expose a real website, not just an app-gate.</li>
<li>Archive things worth keeping at <a href="https://web.archive.org">archive.org</a>.</li>
<li>Pay for open services when you can. Free platforms survive by becoming the product.</li>
</ul>
<p>
But I am genuinely uncertain whether a list of habits is enough. The drift toward closed systems is not driven by malice -- it is driven by convenience, and convenience is hard to argue against. I want to understand how people who did not grow up tinkering with Linux can still have a real stake in keeping the web open. I do not think I have figured that out yet.
</p>
<p>
There are reasons to be hopeful. <a href="https://en.wikipedia.org/wiki/Mastodon_(social_network)">Mastodon</a> and the broader <a href="https://en.wikipedia.org/wiki/Fediverse">fediverse</a> are the most promising development I have seen in years. The underlying protocol, <a href="https://en.wikipedia.org/wiki/ActivityPub">ActivityPub</a>, is an open standard -- anyone can run a server, and servers talk to each other. No single company owns it. No terms of service can disappear your audience overnight. It is the RSS model applied to social networking, and it actually works. It is not perfect and it is not easy, but it exists and people use it.
</p>
<p>
What I really want to see -- and what I think would matter more than any individual user habit -- is companies coming back to the open web. Not just maintaining a presence on closed platforms, but building real websites again. A website is something you own. It has a URL. It can be linked to, indexed, archived, and read without an account. When a business publishes something on a closed platform, they are renting an audience from a landlord who can change the terms at any time. When they publish it on their own site, they are building something that belongs to them and to the web.
</p>
<p>
The open internet will persist as long as it remains the most useful place to be. That is not guaranteed. It is a thing people have to keep choosing -- and we need to make it easy enough for everyone to choose, not just the people who were already curious enough to go looking.
</p>
]]></description>
</item>
<item>
<title>Programming Is Creative</title>
<link>https://peteftw.com/~pete/2026/04/programming-is-creative.html</link>
<guid isPermaLink="true">https://peteftw.com/~pete/2026/04/programming-is-creative.html</guid>
<pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
<description><![CDATA[
<h1>Programming Is Creative</h1>
<hr>
<p>
I started programming at 14, though I did not know that was what I was doing at first. My
introduction was writing connection scripts for <a href="https://en.wikipedia.org/wiki/Bulletin_board_system">BBS</a>
sessions -- the kind of thing that would dial a number, log in, navigate menus, and grab
whatever I was after, all without me sitting there pressing keys. There was no documentation
for this. I figured it out by poking at it. Loops, variables, conditional logic -- I did not
have names for any of it yet, but I understood that I could describe a sequence of steps and
the computer would follow them. That was enough. There is something particular about that
first moment when you get a machine to do something new on its own. It does not matter how
small the thing is. The feeling is out of proportion to it.
</p>
<p>
At 15 I found a book on Borland C at a house where I was babysitting. The family was out,
the kid was asleep, and I sat there reading about compilers. Not what programs do -- what a
compiler is. The idea that you could write something in text, run it through a tool, and get
an executable the machine would run directly felt like a magic trick I needed to understand.
I began writing programs to parse my email inbox and create an HTML archive of the <a href="https://en.wikipedia.org/wiki/Pavement_(band)">Pavement</a>
discussion mailing list (my favourite band at the time). It was kludgy, and there were no
users, but I liked being able to read the emails remotely, so it served its purpose for me.
</p>
<p>
At 16 I signed up for a night class in C programming. Somewhere in that same period I found
Linux, which changed everything. Suddenly I had a shell, and the shell had Bash, and Bash
had shell scripts, and there was Perl, and there were more open source examples than I could
have imagined. The idea that your shell could double as a REPL for development took a minute
to sink in, but it was such a lovely experience writing little shell scripts that fired
immediately. I started experimenting with <code>procmail</code> on my mail server to
pipe mail to shell scripts, which would act on the messages and do little tasks for me. Now
I could email myself and have my computer execute jobs.
</p>
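<p>
The <code>procmail</code> side of that kind of setup is only a few lines. A sketch of the shape (the subject tag and script path here are illustrative, not what I actually ran):
</p>

```text
# ~/.procmailrc -- pipe mail with a tagged subject into a script
:0
* ^Subject:.*\[run-job\]
| $HOME/bin/run-job.sh
```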
<p>
That feeling has always been at the center of what I love about this work. Not just writing
code, but connecting things. A script that pulls data from one system and pushes it into
another. A config file that finally resolves and suddenly two pieces of software that had no
reason to know about each other are cooperating. Different hosts, different operating systems,
different protocols -- figuring out how to make them all speak the same language has always
been where the pleasure lives for me. There is a specific dopamine hit that comes from getting
the last piece of configuration right and watching a service come up clean. I do not think
everyone feels it, but the people who do tend to become programmers.
</p>
<p>
There is also a creative side to this that I think gets undersold. Writing software that
exactly fills a need -- not over-engineered, not a framework where a function would do, just
precisely the right thing for the problem -- feels genuinely artistic to me. The satisfaction
is similar to what I imagine a craftsperson feels when a joint fits perfectly. You knew what
you wanted, you understood the material, and you made the thing. That is not a mechanical
process. It takes taste.
</p>
<p>
I have been using LLMs in my work for the past couple of months. They are effective. They
solve problems. They have made me faster in measurable ways -- I can prototype something in
an afternoon that would have taken a day and a half before, and I can navigate unfamiliar
codebases more quickly than I used to. The business case is obvious and I am not going to
argue against it.
</p>
<p>
But I have noticed something I am still trying to articulate. The enjoyment is gone. Not the
satisfaction of the outcome -- the outcome is often fine. But the process, which used to be
where the pleasure lived, has become something I hand off. The config problem I would have
spent an hour on, iterating and learning why each thing failed, I now describe to a model and
receive an answer. The small elegant function that I would have revised three times to get the
shape right, I now accept in a first draft that is good enough. It works. I just did not make it.
</p>
<p>
I am not sure this is the model's fault exactly. It is more like -- the opportunity for the
experience I value keeps getting preempted. The puzzle gets handed over before I get to feel
the resistance.
</p>
<p>
What worries me more than my own enjoyment is the industry. The people who become excellent
programmers are, in my experience, mostly people who started out in love with the puzzle.
They stayed up late because they wanted to know how it worked, not because someone assigned
them a ticket. If the puzzle keeps getting solved before anyone gets to fall in love with it,
I do not know where the next generation of people who genuinely understand this stuff comes
from. You can use a tool you do not understand for a long time. Until you cannot.
</p>
<p>
I do not have a solution for this. I am not even sure it is a solvable problem -- economic
incentives are what they are, and velocity matters. But I find myself increasingly making a
deliberate choice to sit with a problem for a while before reaching for the tool. Not because
it is more efficient. Because the part that used to feel like mine keeps slipping away, and
I am not ready to let it go entirely.
</p>
]]></description>
</item>
<item>
<title>Porting a C++20 Finger Daemon to FreeBSD</title>
<link>https://peteftw.com/~pete/2026/04/porting-finger-to-freebsd.html</link>
<guid isPermaLink="true">https://peteftw.com/~pete/2026/04/porting-finger-to-freebsd.html</guid>
<pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
<description><![CDATA[
<h1>Porting a C++20 Finger Daemon to FreeBSD</h1>
<i>April 20, 2026</i>

<p>
I wrote an updated finger daemon in modern C++ several months ago to run on a different domain of mine. That server runs Docker on top of Debian, so I'd written the daemon with that platform in mind. With my recent move toward running more 90s-era internet services, I wanted to port it over to FreeBSD and get it running inside a jail on my main BSD server.
</p>

<h2>The Build System Problem</h2>

<p>
The project uses Meson. The meson.build for the main executable had these link args:
</p>

<pre><code>link_args : ['-static', '-static-libgcc', '-static-libstdc++'],</code></pre>

<p>
Those are GCC-isms. FreeBSD ships clang, and clang doesn't have <code>-static-libgcc</code>
or <code>-static-libstdc++</code> -- it uses libc++ and handles its runtime differently.
Trying to build with those flags just dies at link time. The fix was simple: remove the
whole <code>link_args</code> line. We don't need a statically linked binary inside a jail
anyway.
</p>
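<p>
If you wanted to keep static linking for Linux/GCC builds instead of dropping it outright, one option is to gate the flags on the compiler id. A sketch, not what I did -- I just removed the line:
</p>

```meson
# GCC-only static-link flags; clang/FreeBSD builds get an empty list.
cpp = meson.get_compiler('cpp')
static_link_args = []
if cpp.get_id() == 'gcc'
  static_link_args = ['-static', '-static-libgcc', '-static-libstdc++']
endif
```

<p>
Then pass <code>link_args : static_link_args</code> on the executable target.
</p>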

<h2>Boost ASIO and pthreads</h2>

<p>
After stripping the bad flags the linker still failed, this time with a pile of undefined
symbols:
</p>

<pre><code>ld: error: undefined symbol: pthread_condattr_init
>>> referenced by posix_event.ipp:41
ld: error: undefined symbol: pthread_create
>>> referenced by posix_thread.ipp:60</code></pre>

<p>
Boost ASIO uses pthreads internally for its event loop. On Linux with GCC and the old
static flags, this got pulled in implicitly. On FreeBSD with clang it does not -- you have
to ask for it explicitly. The boost-libs package even tells you this in its install message:
"Don't forget to add -pthread to your linker options when linking your code." I missed it
the first time.
</p>

<p>
The right way to do this in meson is a threads dependency:
</p>

<pre><code>threads_dep = dependency('threads')</code></pre>

<p>
Then add <code>threads_dep</code> to the dependencies list for every build target. The final
meson.build for the main executable looks like:
</p>

<pre><code>executable('finger',
  'main.cpp', 'handler.cpp',
  dependencies : [boost_dep, threads_dep],
  install : true)</code></pre>

<p>
After that, clean configure and compile:
</p>

<pre><code>meson setup builddir
meson compile -C builddir
meson install -C builddir</code></pre>

<p>
Binary ends up at <code>/usr/local/bin/finger</code>. The C++ source itself needed zero
changes -- coroutines, std::filesystem, ASIO, all of it compiled clean under clang 19.
</p>

<h2>The Jail</h2>

<p>
I created a thin Bastille jail called <code>finger</code> at <code>192.168.1.104</code>.
The daemon reads plan files from <code>/var/finger/users/</code> -- one file per username,
contents returned verbatim. Rather than manage those files from inside the jail, I put them
on the host at <code>/srv/finger/users/</code> and nullfs bind-mounted that directory in:
</p>

<pre><code>/srv/finger/users  /usr/local/bastille/jails/finger/root/var/finger/users  nullfs  rw  0  0</code></pre>

<p>
That goes in the jail's <code>fstab</code>. Now I can add or edit plan files from the host
without touching the jail at all:
</p>

<pre><code>echo "Just another hacker." &gt; /srv/finger/users/pete</code></pre>
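<p>
For reference, the lookup the daemon performs amounts to something like this shell sketch (the daemon itself is C++; the character whitelist here is an illustrative assumption, chosen so that <code>../</code>-style names cannot escape the directory):
</p>

```shell
#!/bin/sh
# Sketch of the plan-file lookup: one file per username under PLAN_DIR,
# contents returned verbatim, anything suspicious rejected.
PLAN_DIR="${PLAN_DIR:-/var/finger/users}"

lookup_plan() {
    user="$1"
    case "$user" in
        ""|*[!A-Za-z0-9_-]*)
            # empty, or contains a char outside the whitelist (e.g. "../")
            echo "No such user."
            return 1 ;;
    esac
    if [ -f "$PLAN_DIR/$user" ]; then
        cat "$PLAN_DIR/$user"
    else
        echo "No such user."
        return 1
    fi
}
```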

<p>
Port 79 forwarding is handled by a one-liner in Bastille's <code>rdr.conf</code>:
</p>

<pre><code>dual re0 any any tcp 79 79</code></pre>

<p>
Bastille translates that into a pf rdr rule on reload.
</p>
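<p>
For the curious, the resulting pf rule is roughly the following -- a sketch of what Bastille generates, so the exact anchor and form may differ by version (<code>dual</code> also adds an inet6 counterpart):
</p>

```text
rdr pass on re0 inet proto tcp from any to any port 79 -> 192.168.1.104 port 79
```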

<h2>The Service</h2>

<p>
The daemon runs in the foreground -- no self-daemonizing. FreeBSD's <code>daemon(8)</code>
handles that. The rc.d script is minimal:
</p>

<pre><code>#!/bin/sh
# PROVIDE: fingerd
# REQUIRE: NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name="fingerd"
rcvar="fingerd_enable"
command="/usr/sbin/daemon"
command_args="-f -p /var/run/fingerd.pid /usr/local/bin/finger"
pidfile="/var/run/fingerd.pid"

load_rc_config $name
: ${fingerd_enable:=NO}

run_rc_command "$1"</code></pre>

<p>
Drop that in <code>/usr/local/etc/rc.d/fingerd</code>, <code>chmod 755</code> it, then:
</p>

<pre><code>sysrc fingerd_enable=YES
service fingerd start</code></pre>

<h2>Result</h2>

<p>
The whole thing works. The C++ code was perfectly portable -- no Linux assumptions baked in.
The only friction was the build system carrying GCC baggage and the implicit pthread link
that Linux let slide. Two changes to meson.build, a jail, a bind mount, and an rc script.
</p>

<pre><code>$ finger pete@peteftw.com
Just another hacker.</code></pre>
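<p>
If you don't have a finger client handy, the protocol is thin enough to speak by hand: per RFC 1288, a request is just the username followed by CRLF on TCP port 79.
</p>

```shell
# Equivalent to the query above using a plain TCP client:
#
#   printf 'pete\r\n' | nc peteftw.com 79
#
# The entire request for "pete" is six bytes on the wire:
printf 'pete\r\n' | wc -c
```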

<hr>
<a href="/~pete/">back</a>
]]></description>
</item>
<item>
<title>Hello World</title>
<link>https://peteftw.com/~pete/2026/04/hello-world.html</link>
<guid isPermaLink="true">https://peteftw.com/~pete/2026/04/hello-world.html</guid>
<pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
<description><![CDATA[
<h1>Hello World</h1>
<hr>
<p>
This is the start of some static blogging. I didn't want to use something like Hugo to maintain a static site, but rather write all of the HTML myself. Yet I still wanted indexing, so I've written a little Perl script to index all of the pages to make discovery easier.
</p>
<p>
I considered <a href="https://en.wikipedia.org/wiki/RSS">RSS</a>, but that wasn't invented until 1999, which is a little later than my initial introduction to writing homepages and the like (1994).
</p>
<p><em>Edit:</em> I have since added an <a href="https://peteftw.com/~pete/rss.xml">RSS feed</a>. The open internet deserves it.</p>
]]></description>
</item>
</channel>
</rss>
