more typos and spelling.

Gabe Venberg 2024-07-30 23:58:49 -05:00
parent 6b72cc3812
commit e7d4442af6
8 changed files with 39 additions and 40 deletions


@ -4,24 +4,24 @@ date = 2021-04-11T19:19:51-05:00
draft = false
+++
I've been using Arch Linux for several years now.
Of course, my first installs were... blunderous, as I wanted to do full disk encryption from the get-go, and I didn't know what I was doing.
After those first one or two installs, I generally settled on LVM on LUKS with a GRUB bootloader and my swap on an LVM volume,
mostly because it makes it much easier to set up hibernation/suspend-to-disk than with, say, a swap file.
(with a swap file, you have to deal with file offsets, and I have never gotten a satisfactory answer as to whether it's possible for the filesystem to just *move* a file to a different disk sector in the process of, say, defragging with a very full hard drive.)
With my newest laptop, I decided to try out btrfs, in large part due to its snapshot system and ability to transfer those snapshots over a network.
(I'm hoping to make a lightweight filesystem backup using this, on top of the data-level backups I currently use.)
However, suspend-to-disk is also quite important to me, and the Arch Wiki is really only clear on how to do that with unencrypted partitions, LVM on LUKS, and swap files.
The Arch Wiki has some info on how to do it for the encrypt hook with a custom mkinitcpio hook, or with sd-encrypt hooks by just specifying multiple devices, but I didn't want to be writing a ton of custom config for the encrypt hook, and the section on sd-encrypt was not very clear at all, so I decided to do some experimentation and write up what worked for me.
## A note on security and risk profiles
The encryption scheme I am setting up in this guide is only meant to protect your data from theft of your physical device when it is turned off or suspended to disk.
Full disk encryption will not protect you from anything while your laptop is powered on. After boot, the encryption is completely transparent to userspace.
Also, I am not encrypting the boot partition, and I'm not setting up any sort of secure boot.
This means that an attacker could hypothetically replace your boot partition or firmware and keylog your password, so if you suspect your computer has been tampered with, *don't* boot it up.
To reiterate, this setup by itself only protects your data if your powered-down machine is stolen. It does not protect your data from being stolen in any scenario where your laptop is powered on during tampering or you log in after it has been tampered with.
@ -34,7 +34,7 @@ such as setting up a graphical environment.
Also, some of the middle steps require some modification depending on what sort of final setup you want, and your hardware.
I will call out those modifications in the relevant steps.
All this said, I would discourage you from blindly following this guide if it's your first time installing Arch (or a similarly DIY distro like Gentoo).
You should clearly understand what most of these commands do before typing them in.
Anyway, start by booting up the Arch ISO...
@ -42,7 +42,7 @@ Anyway, start by booting up the arch ISO...
## Installing via SSH
Sometimes, you don't want to be switching between the computer you are installing Linux on and the computer with the documentation and a search engine on it,
and I've found the best way to avoid that is to set up a simple SSH session from the Arch ISO to the computer with the documentation on it.
⚠️ **WARNING:** On a normal, already installed machine, *NEVER* use just a password for SSH. *ESPECIALLY* if it is internet-facing or connected to a public network.
We are only doing this because we are (hopefully) on a personal network, and the password-based SSH session only exists on the Arch ISO, so as soon as you boot into your fresh system, the SSH session will be gone.
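For reference, the whole dance on the live ISO only takes a handful of commands. Here's a minimal sketch (whether sshd is already running on your particular ISO may vary, and the address is whatever `ip addr` reports for your machine):
{{<highlight console "linenos=false">}}
# on the machine being installed, booted from the Arch ISO
# (you are already root there):
# give the live environment's root user a temporary password
passwd
# make sure the ssh daemon is running
systemctl start sshd
# find this machine's address on the local network
ip addr
# then, from the machine with the documentation on it:
ssh root@<address from above>
{{</highlight>}}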


@ -4,13 +4,13 @@ date = 2021-12-12T14:59:31-05:00
draft = false
+++
During the 5 or so years I've had Nextcloud, I've always been quite happy with the web client, but the device clients... need some work.
I recently figured out how to resolve one of my biggest pain points on the Linux desktop client, and am recording it here, mostly so I don't forget next time I set up a new computer,
and to save others with the same problem from endless forum post and GitHub issue crawling.
## The cause
Nextcloud expects the environment it is running in to have a 'keychain manager' installed and accessible by libsecret.
However, currently, the Arch Linux Nextcloud package does not list libsecret or any keychain manager as a dependency.
This does not cause a problem if you are using a desktop environment, as they will come with one in their own dependency cloud, but if you are just using a window manager, you may very well not have one installed.
(as a side note, this also seems to cause a significant delay in the client starting up, probably some sort of timeout waiting to access the keyring.)
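If that's your situation, the general shape of the fix is to install something that provides the Secret Service API and make sure it is running in your session. The sketch below uses gnome-keyring, but that's just one option (KeePassXC can fill the same role), and the exact packages and startup location are assumptions on my part rather than a prescription:
{{<highlight console "linenos=false">}}
# install the client library and a keyring that implements the Secret Service API
sudo pacman -S --needed libsecret gnome-keyring
# then start the daemon somewhere in your window-manager session startup,
# for example in ~/.xinitrc or your compositor's config
eval $(gnome-keyring-daemon --start --components=secrets)
{{</highlight>}}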


@ -9,7 +9,7 @@ This is very easy to do in X11 with a setxkmap command.
However, with my laptop, I try to run without X as much as possible. (I've found it makes a nice, distraction-free environment, and it seems to be pretty good for battery life.)
Obviously, without X, we cannot use setxkbmap.
In order to do this without the tools in setxkbmap, we will have to edit the keymap used by the virtual console and set it as the keymap using localectl.
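Before diving into the details, here is roughly what that flow looks like (the file name and install path here are assumptions for illustration; the specifics are worked out below):
{{<highlight console "linenos=false">}}
# dump the currently loaded console keymap as a starting point
sudo dumpkeys > personal.map
# ...edit personal.map with your changes...
# try it out in the current virtual console
sudo loadkeys ./personal.map
# to make it persistent, install it somewhere localectl can find it
# (this directory is an assumption; verify with `localectl list-keymaps`)
sudo cp personal.map /usr/local/share/kbd/keymaps/
localectl set-keymap personal
{{</highlight>}}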
Now, according to the Arch Wiki, we should be able to create a file containing
@ -55,7 +55,7 @@ This means, that in order to correctly modify the keymap, we either have to defi
## Keymap patch
To continue overriding the default keymap, you can simply manually repeat the control command.
Now, technically, there are 256 columns in the keymap file, but, at least for Latin keyboards, only the first 16 are used.
As such, our keymap patch looks like:
{{<highlight console "linenos=false">}}


@ -100,7 +100,7 @@ The majority of the foundational CLI tools on a Linux pc, even one installed yes
## Ok, so?
Now, there's nothing wrong with this.
The tools still work fine, but in the half-century since they were first written,
terminals and the broader Linux ecosystem have all changed.
Terminals now have the capacity to display more colours, Unicode symbols, and even inline images.
@ -110,7 +110,7 @@ whereas in the past, terminals were the only way one interacted with the compute
Perhaps more importantly, our knowledge has expanded:
our knowledge of user interfaces,
of what works and what doesn't,
of what usecases are common and what usecases are niche,
the way that error messages can teach,
the value of a good out of the box experience,
@ -119,9 +119,9 @@ and the value of documentation that is easy to find and digest.
These changes to the environment surrounding CLI apps in recent years have
led to a resurgence in development of command line utilities.
Instead of just developing completely new tools or cloning old tools,
I've noticed that people are rethinking and reinventing tools that have existed since the early days of Unix.
This isn't just some compulsive need to rewrite every tool out there in your favorite language.
People are looking at the problem these tools set out to solve,
and coming up with their own solutions to them,
exploring the space of possible solutions and taking new approaches.
@ -172,12 +172,12 @@ It has a few colours, shows everything the bash prompt does, and additionally sh
Text editors are another great example of the evolution of out of the box defaults.
Vim and Neovim both improved on their predecessors,
but much of that improvement is locked behind extremely complex configuration experiences and plugins.
Here's four different terminal text editors with no configuration applied:
![vi, vim, neovim, and helix editors in their default
configuration](editors.png)
Vi (top left) is our baseline, and, as far as I can tell, doesn't actually
support much in the way of configuration. What you see out of the box is more or less
what's there.
@ -192,12 +192,12 @@ In order to take advantage of the LSP and Treesitter support, you have to instal
which means learning a Nvim package manager, learning how to configure LSPs,
and configuring a new LSP for every language you want to use it with
(or finding out about Mason and being OK with having multiple levels of package management in your Nvim install alone).
Don't get me wrong: Neovim is a great editor once you get over the hump.
I still use it as my daily driver, but so much of its functionality is simply hidden.
Then we have the Helix (bottom right) editor.
Slightly glaring default colour scheme aside, everything is just *there*.
Helix doesn't have plugin support [yet](https://github.com/helix-editor/helix/discussions/3806),
but it has so much stuff in core that,
looking through my neovim plugins,
pretty much all of them are in the core editor!
@ -216,19 +216,19 @@ but its an extremely usable IDE out of the box thanks to having all of its featu
In my nvim config, I use [which-key](https://github.com/folke/which-key.nvim),
a plugin that displays available keybindings as you type.
I've been using vim for almost a decade, including a long time without which-key,
so it's not like I never learned the keybindings, but I still find which-key useful.
Why is that, you may ask?
Well, because even though I use (n)vim every day, I don't use all the keybindings every day.
I might go months between using, for example, `dap` (delete current paragraph), or `C-w x` (swap current window for next).
Naturally, when you go months without using certain parts of a program, you tend to forget they exist.
Which-key solves that handily by offering quick, non-intrusive reminders of what is available.
Here's what my which-key config looks like:
![Which-key.nvim](nvim_which_key.png)
Now, which-key and its like have been around for a while,
but other TUI programs have integrated contextual hints without the need for a plugin.
The two that I am aware of are Zellij and Helix.
![Helix's contextual hint](helix_contextual_hint.png)
@ -237,7 +237,7 @@ Helix both has autocompletion for its built in command line and a contextual hin
This drastically helps both new and experienced users learn and remember keybinds without making the editor any less powerful.
Zellij has a bottom bar displaying keybindings available in the current mode.
This has proven invaluable for me, as I don't use a terminal multiplexer much
(On GUI systems, I use the window manager for managing multiple terminals), and as such tend to forget the keybinds.
<!-- look at zellij and helix and their built in keymap cheatsheets-->
@ -247,17 +247,17 @@ This has proven invaluable for me, as I dont use a terminal multiplexer much
<!-- look at sd, rg, and fd-->
Where possible, documentation should not even be required for the most common use cases.
Whenever I want to use `find`, I almost always have to first look at the man page,
as I don't use it quite often enough to memorize it.
But that's totally unneeded! 90% of my uses of `find` take the form of `find ./ -name "*foo*"`.
With [fd](https://github.com/sharkdp/fd), the exact same invocation is just `fd foo`: dead simple, no man page needed.
Of course, 10% of the time I'm doing something else and have to look at the manual even with fd,
but the point is that manuals are for when you want to do something with the tool that is not the most common usecase.
There are many other examples as well. How many of your grep invocations are in the form of `grep -R 'foo' ./`?
Most of mine are. [Ripgrep](https://github.com/BurntSushi/ripgrep) shortens that to `rg foo`
while still having all the power of grep when I need it, and it is faster to boot!
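To put those two comparisons side by side (with `foo` standing in for whatever you are actually searching for):
{{<highlight console "linenos=false">}}
# the classic invocations
find ./ -name "*foo*"
grep -R 'foo' ./
# the streamlined equivalents for that most-common case
fd foo
rg foo
{{</highlight>}}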
This isn't to say that tools should 'dumb themselves down' or hobble themselves to make them easier to use.
However, they should keep in mind the most common usecase that their tool is likely to be used for,
and streamline that usecase.
@ -279,7 +279,7 @@ and file modification time laziness
These features are *good* features when make is being used as a build system,
but another major use of make that has emerged is as a way to run common tasks.
So alongside `make build` to build your program, you would have `make bootstrap`, `make test`, `make config`, etc.
This is where the design decisions behind make the build system start to hinder make the task runner,
making one learn about make the build system in order to work around those features to use make the task runner.
However, make can't drop these features, both because projects still actively use make as a build system,


@ -87,7 +87,7 @@ In short, you can think of stow taking a folder, and symlinking the contents of
## Ok, how do I use this to manage my dotfiles?
So now that you know how stow operates, you can make a 'package' for every program you have dotfiles for.
I'd encourage you to take a look at the directory structure of my dotfiles [repo](https://git.venberg.xyz/Gabe/dotfiles) if you want more examples of the directory structure you should aim for.
Once you have the file structure down, all you need to install on a new machine is `git` and `stow`: git clone your dotfile repo, `cd` into it, and `stow` the folders for the software you want to install configs for.
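As a minimal sketch of what that ends up looking like (the package names and files here are hypothetical, not the actual layout of my repo):
{{<highlight console "linenos=false">}}
# a dotfiles repo with one stow 'package' per program
dotfiles/
├── git/
│   └── .gitconfig
└── nvim/
    └── .config/
        └── nvim/
            └── init.lua

# on a new machine: clone it into your home directory, cd in,
# and stow the packages you want; stow symlinks them into the
# parent directory, which here is your home directory
git clone <your dotfiles repo> ~/dotfiles
cd ~/dotfiles
stow git nvim
{{</highlight>}}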


@ -13,7 +13,7 @@ The problem was, if I was going to get a new keyboard,
I wanted it to be for both the office and travel,
and most prebuilts around are not that portable.
I also was not confident enough in my soldering skills to solder the SMT diodes found on many handbuilt designs out there.
Eventually, though, I stumbled upon the GitHub page for the [Ferris Sweep](https://github.com/davidphilipbarr/Sweep).
## The Basic Build
@ -46,12 +46,12 @@ so you don't have to do this every time you reflash your keyboard)
## Layout
I wasn't feeling quite adventurous enough to switch away from qwerty,
but, the Sweep being a 34-key board, some layout adjustment would be needed.
I took the Sweep's [default layout](https://github.com/qmk/qmk_firmware/tree/master/keyboards/ferris/keymaps/default)
and used the [QMK configurator](https://config.qmk.fm/) to customize it.
First, I moved space to my left thumb, as I'm left-handed.
I put esc on one of the thumb keys for use in vim.
I moved the numpad layer to my right hand side, swapping its position with the function key layer.
I also put the meta key as a hold-mod on the lower pinky keys, as my window manager uses it for all its keybinds.
@ -59,7 +59,7 @@ I also put the meta key as a hold-mod on the lower pinky keys, as my window mana
The mod-tap home row layer changes actually feel really natural,
and the extra space afforded by layers allows me to organize things in a more natural feeling way,
such as putting the numbers in a numpad layout, rather than along the top.
I'm not quite happy with my modifiers being mod-taps on the bottom row;
they can feel slightly awkward to reach,
and I may experiment with moving them around, potentially on the top row.
@ -105,7 +105,7 @@ and if they do, I did socket the microcontrollers for easy replacement.
## Conclusion
It took me all of a week to fall in love with the Sweep's form factor,
and, 1 month later, I'm convinced I will never let myself work on a regular keyboard for a long period of time again,
that's how much I've come to appreciate split keyboards.
The fact that the board has no pesky diodes or other surface mount parts means it's a very accessible first build,
and one I'd recommend to anyone interested in improving their typing ergonomics.


@ -44,7 +44,7 @@ Individual entries can be one of several built in datatypes,
including rich datatypes like datetimes, durations, and filesizes.
Nushell can also open many filetypes and turn them into nushell native datastructures to work with,
including CSV, JSON, TOML, YAML, XML, SQLite files, and even Excel and LibreOffice Calc spreadsheets.
Once you have your data in nushell datastructures,
you can do all sorts of manipulations on it.
@ -120,7 +120,6 @@ update time {into datetime -f '%d/%b/%Y:%T %z'} |
# parse into proper integer
update bytes_sent {into int}
{{</highlight>}}
(each line has a comment explaining what it does, for those unfamiliar with the nushell language)
Now that we have it in nushell tables, we can bring all of nushell's tools to bear on the data.
For example, we could plot a histogram of the most common IPs, just by piping the whole thing into `histogram ip`.
@ -156,7 +155,7 @@ You can optionally give the arguments a type
{{<highlight sh>}}
def recently-modified [cutoff: string] {
# show all files recursively that were modified after a specified cutoff
ls **/* | where modified > (
# create timestamp from input
$cutoff | into datetime


@ -4,7 +4,7 @@ date = 2023-10-28T18:41:37-05:00
draft = false
+++
I've been messing around with embedded Rust recently, using the BBC micro:bit as a learning platform.
It's really cool to see a high-level language achieving the same results as low-level C.
However, one of my favorite features of Rust, the ease of unit testing, is a bit less straightforward to do in cross-compiled, no-std projects.