Miguelmurca's Gemlog
=> //miguelmurca.flounder.online/gemlog

2022-01-22
# Slightly Cursed: Running Catgirl (The IRC Client) in Docker
=> https://git.causal.agency/catgirl/about/ So Catgirl is an excellent minimalist IRC client, developed by June Bug, who also hangs around the small net.
June provided more than enough instructions to compile catgirl and its dependency, LibreTLS. However, if, like me, you're not super familiar with automake, it's not obvious how to do a local installation, rather than one into /usr/.
As a slightly cursed solution, I decided it was easier to write a Dockerfile to install Catgirl in a container than to risk polluting the /usr/ of my work-provided computer. Here's the Dockerfile, for future reference for myself and others:
```
FROM debian:latest
ENV DEBIAN_FRONTEND=noninteractive
# Install the necessary packages
RUN apt update && \
    apt install -q -y \
    manpages man-db pkg-config \
    ncurses-dev \
    git \
    libssl-dev \
    autoconf \
    make automake \
    libtool
# Install LibreTLS
WORKDIR /opt/libretls
RUN git clone https://git.causal.agency/libretls . && \
    autoreconf -fi && \
    ./configure && \
    make all && \
    make install && \
    ldconfig
# Install Catgirl IRC
WORKDIR /opt/catgirl
RUN git clone https://git.causal.agency/catgirl . && \
    ./configure && \
    make all && \
    make install
# Change into a low permissions user
RUN groupadd -r catgirl && useradd --no-log-init -r -g catgirl catgirl
USER catgirl
WORKDIR /home/catgirl
# Update Catgirl server configuration files
COPY --chown=catgirl config/ /home/catgirl/.config/catgirl/
# Run Catgirl (by default)
ENTRYPOINT ["catgirl"]
```
Placing this file in a folder as follows...
```
catgirl
├ Dockerfile
└ config
  └ (catgirl config files)
```
... you can build the image and run it with...
```
docker build --tag="catgirl:latest" (location of folder outlined above)
docker run -ti catgirl (arguments)
```
Now, as expected, June didn't love it when I told them about this on irc.tilde.chat. They did let me know about the following, which would help with a local installation:
* Install location is set by PREFIX; pkg-config tells catgirl where to find dependencies
* The catgirl binary is self-contained; it doesn't have any required runtime files
* The man file is static, and could also be copied over from somewhere else
* There's an undocumented uninstall make target
I seem to remember trying to set PREFIX and running into some problems, but I no longer remember what. Oh well: the Docker solution may be cursed, but it works fine in this situation.

2022-01-03
# TODO in the terminal with git for synchronization
Today I cobbled together this bash script (thanks, Stack Overflow):
```
#!/usr/bin/env bash
# todo.sh
if [[ $0 == "${BASH_SOURCE[0]}" ]]
then
    echo "This script should be sourced, not run."
    exit 1
fi
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
todo() {
    GIT="git --git-dir=$SCRIPT_DIR/.git --work-tree=$SCRIPT_DIR"
    merge() {
        # Merge from the remote repository
        $GIT fetch
        $GIT merge
        # Resolve conflicts, if any, in an interactive subshell
        CONFLICTS=$($GIT ls-files -u | wc -l)
        if [ "$CONFLICTS" -gt 0 ] ; then
            bash --init-file <(echo ". \"$HOME/.bashrc\"; echo -e \"There are merge conflicts in the git repository.\nPlease resolve these merge conflicts and exit this subshell when ready.\"; cd \"$SCRIPT_DIR\"; git status")
        fi
        CONFLICTS=$($GIT ls-files -u | wc -l)
        if [ "$CONFLICTS" -gt 0 ] ; then
            echo "The merge conflict was not resolved. Aborting."
            $GIT merge --abort
            # return, not exit: this file is sourced, so exit would
            # close the user's shell
            return 1
        fi
    }
    merge || return 1
    # Determine the editor (prefer sensible-editor, then the usual variables)
    SENSIBLE="$(command -v sensible-editor)"
    EDITOR="${SENSIBLE:-${FCEDIT:-${VISUAL:-${EDITOR:-vi}}}}"
    # Edit the TODO file
    "$EDITOR" "$SCRIPT_DIR/todo.txt"
    # Commit & push changes, if any
    if [[ $($GIT status --porcelain) ]]; then
        $GIT add "$SCRIPT_DIR"
        $GIT commit -m "todo update: $(date +%s)"
        if ! $GIT push
        then
            # Something went wrong with the push, probably new commits on
            # the other end. Merge, resolve conflicts, and retry.
            merge || return 1
            $GIT push
        fi
    fi
}
```
To use it:
* Create an installation folder (mine is $HOME/Installs/todo)
* Initialize it as a git repository (git init, git remote add ...)
* Place this script there
* Add "source $TODO/todo.sh" (where $TODO is the folder where you placed the script) to your ~/.bashrc (or equivalent)
* Restart your shell
Now you can run `todo` from the shell to open a TODO file, whose changes are automatically committed and pushed.
User beware: this is reasonably untested; I may come back to update this.

2022-01-02
# What are some of your favourite small apps?
One of my recent pastimes is making small command-line applications in Python to address some small task that I think can be automated.
=> https://github.com/mikeevmm (you can find some of them pinned here.)
On the other hand, lately I've been loving scli, a terminal Signal client.
Every so often I find relatively little-known but super cool terminal app projects like these on GitHub. But it seems like I only find them when I go looking for something very specific.
So, I'm looking for recommendations: npm or pip executables that are relatively simple in their functionality, but fill some gap usefully.
I can start:
=> https://github.com/charmbracelet/glow glow is great for markdown in the terminal.

2022-01-01
# Democratic Computing
I was thinking about setting up a shared tilde with friends, but then I thought of something: I wouldn't be comfortable "owning" the server in which my friends participate, yet I also know that they probably don't have the technical expertise to manage it themselves.
Ideally, there would be a democratic system set up, where I had root privileges but other users could vote such that a 2/3rds majority vote could boot me as a root user (and place another user in that place).
The tricky part is that even as a root user, the "president" could not have control over this system, or an elected manager could, well, impose a dictatorship.
Are there instances of democratic computer management? Sometimes I think about how, despite *vague gesture* democracy, most of our computer models are fairly (very, even) feudal.
An OVHCloud VPS is $3.50/month. Maybe a fun experiment would be trying to create such a democratic tilde?
(please don't talk to me about the blockchain)

2021-12-29
My portfolio website revamp is complete. It uses only HTML and CSS, but I think it looks pretty good nonetheless.
=> https://meegleeto.com meegleeto.com
It's a little funny how my clearnet page is professional while my gemini capsule is personal. Something to do with the kind of approach each medium encourages, I think.
I'm proud of the design.
I don't really need a portfolio page, though. I'm just a lousy student.

2021-12-16
# TAU-800B Super
=> https://twitter.com/mikeevmm/status/1471194265376100355 (Adapted from the corresponding Twitter thread, which has a lot more visual aid.)
Last week, a friend (André Duarte) and I presented an art installation for the Processing Community Day Coimbra 2021.
It's titled TAU-800B.
We put a ton of thought and ideas into it, so I'll be going over them here!
=> https://pbs.twimg.com/ext_tw_video_thumb/1471163237664272390/pu/img/1kmqk4NAGy5oR_Tv.jpg [Picture of the installation.]
But first, a TL;DR:
> TAU-800B approaches the theme of Anachronism in many ways, some literal and some figurative. It is a full emulation of a time-traveling computer that, despite having that ability and sporting a retrofuturistic interface, obeys computing paradigms of the 70s.
> These strange internals (6-bit words, a 6-word stack, assembly only) are fully exposed to the viewer who takes a closer look. Keeping with the 70s theme, there's also a complete reference manual next to the screen, providing further hints about the nature of the artifact.
=> https://www.yewtu.be/watch?v=uQgSkM6u0AM (If you'd rather hear me talk much too fast, André and I gave a 10-minute talk on the piece as part of the exhibition; 2h mark.)
(we had to rush a little though, so I'm hoping to get into more detail here.)
So, the theme for the call was "Anachronism". That's a term that refers to inconsistencies related to time, like a knight wielding a gun, or European Middle-Ages peasants eating potatoes. Think steampunk.
As a word, Anachronism has Greek roots, "ana" + "chronos", or "against" + "time".
We knew from the start that we wanted to explore this meaning figuratively and literally.
On the other hand, Processing is all about computational creativity and integrating code in your art.
Computers against time.
Computers disobeying time.
Time-traveling computers?
This drove us early to the core concept of TAU-800B: a time-travel assisted computer, that would mix old (like, minicomputer old) architecture and I/O with a futuristic interface.
This concept had some nice features:
1. It allowed us to explore both the literal and figurative senses of the theme, by incorporating time travel and tech aesthetics from multiple eras
2. It gave us a free pass to get some things wrong/too modern; it's an anachronistic artifact after all!
3. It left a lot of space to create a narrative and an interaction with the audience when presenting the machine.
4. Lots of cool programming!
We began, then, attacking this concept on multiple fronts: we designed a mockup for the interface, and began writing a reference for the machine's assembly, bringing in ideas for the latter from all of x86, 6502, and Z80A!
=> https://twitter.com/mikeevmm/status/1471173440619040770 You can also see here the first mockup for the interface versus the final one. The core elements (shown state and size details) never changed!
(We featured a clock in the center from the start, given the time theme, and as a way to later do meaningful I/O.)
Another running theme, visual-wise, was the Japanese retro-future aesthetic. I had recently watched Neon Genesis Evangelion, and if there's one thing EVA nails, it's the computer interfaces. It was also a further nod to the theme, using the 90s standard of futuristic rather than a more modern one.
But what does "time-traveling assisted computer" even mean, I hear you ask? (maybe. humor me.)
Well, both André and I are physicists working in computer science, and it turns out that this is a sufficiently serious question that smart people have thought about it.
In particular, consider the Novikov Self-Consistency Principle. It's basically the stance that you have to consider time as a whole, so that paradoxes are impossible, because things have to make sense, well, as a whole.
(Yes, it's the timey-wimey thing.)
This turns out to have some interesting consequences for computation:
Consider a computer equipped with a kill-switch and a time machine. Now give it some really hard problem, and the following instructions: first, guess an answer to the problem; then, if the guess is wrong, go back a moment in time and activate the kill-switch.
Because getting it wrong would induce a paradox, it must, physically, guess correctly! Congratulations: your machine now has infinite computing power.
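Stripped of the physics, the Novikov trick amounts to a search over timelines where the paradoxical ones are discarded. Here's a toy Python sketch of that idea (all names are mine, for illustration; the point is that physics would do this search "for free", while a simulation has to pay for it):

```python
# Toy "Novikov computer": enumerate the possible guesses, and keep only
# the timelines where the kill-switch was never triggered, i.e. where the
# guess was correct. Any surviving timeline is a self-consistent universe.
def novikov_guess(is_correct, possible_answers):
    surviving = [a for a in possible_answers if is_correct(a)]
    return surviving[0] if surviving else None  # None: every timeline is a paradox

# e.g. "instantly" find a factor of 91 in the one consistent timeline
assert novikov_guess(lambda a: 91 % a == 0, range(2, 91)) == 7
```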
Our particular take on this sort of time travel ability is that the TAU can write values to its registers at past and future moments.
Of course, seeing that we wanted to emulate the machine, this could've posed some problems. That's where the 70s limitations we imposed came in handy.
As you might know from studying dynamical systems, if you have some transformation (say, from a machine state at a given moment to the state in the next), that transformation might have what's called a "fixed point": a state that's essentially not affected by the transformation.
In particular, a consistent universe (i.e. sequence of machine states) must be a fixed point of the program that the machine is running.
Because the RAM was so small, we could actually keep track of its state throughout a large window of time (say, 500 instruction steps). So, whenever we hit a temporal addressing, we tried to converge to a fixed point by repeatedly simulating the universe.
=> https://pbs.twimg.com/media/FGqogFjXMAUwvrg.png Yes, this is confusing. Here's a picture, if it helps.
(Note also that this scheme isn't perfect; there are a lot of ways in which it can fail: unstable fixed points, interweaving time addressing... But shh, we're the ones writing the programs to be run!)
(A way this *does* work, however, is in fast-tracking Newton's method 👀)
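The fixed-point search described above can be sketched like this. This is a toy Python reconstruction, not the actual (Rust) emulator: `step` stands in for one instruction step of the machine, and the fact that it may peek anywhere in the guessed history is what a temporal write amounts to.

```python
def find_consistent_history(step, initial, window=500, max_rounds=100):
    """Search for a self-consistent sequence of machine states."""
    history = [initial] * window            # initial guess for the "universe"
    for _ in range(max_rounds):
        new_history = [initial]
        for t in range(window - 1):
            # `step` may read any point of the guessed history:
            # that's the temporal addressing
            new_history.append(step(new_history[t], history))
        if new_history == history:          # fixed point: a consistent universe
            return history
        history = new_history
    raise RuntimeError("no fixed point found (unstable or inconsistent)")

# An ordinary, non-temporal program converges immediately:
assert find_consistent_history(lambda s, h: s + 1, 0, window=5) == [0, 1, 2, 3, 4]
```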
You need to carry out the above process really fast, so that it looks effectively instantaneous. A convenient segue into discussing the tech we used!
The actual display is driven by Electron, because the web is super portable, and we weren't sure how the installation would be displayed (aspect-ratio-wise, etc.).
However, like I've just said, we needed to be fast, so the whole backend (parsing, emulation) was written in Rust!
We wanted as little computation to be done on the web side as possible (to save time), so all Electron does is poll for state updates, and show the current internal state of the machine.
Here's the diagram we were using during development:
```
Frontend:
  Start
    └─► State Poll ──► Request poll (to backend) ──► Await response
            ▲                                            │
            └─── Update visual state to ◄────────────────┘
                 reflect current state

Backend:
  Start
    ├─► Threaded poll response and processing
    │     (answers each frontend poll with the machine's current state)
    └─► Begin processing bytecode ──► Pending query? ──► (loop)
```
=> https://neon-bindings.com (To interface Rust and JS we used the very good package "Neon".)
Finally, let me talk briefly about the manual.
The most obvious thing about it is that it's on a corkboard (I stole it from my girlfriend). The second most obvious thing is that one of the pages has coffee spilled on it (I stole that coffee from lab mates that don't empty the coffee machine).
This was a really fun part of the project, because we wanted to make the guide not just a reference to the assembly language of the machine, but also something with a lot of personality, the same way PDP-11 manuals are super endearing.
(as Adam Saltsman would put it, we wanted the manual to be good trash)
Furthermore, there's this underlying narrative we gradually constructed that TAU-800B is a heavily used machine in a lab setting.
To achieve that professional/corporate look, André and I went with LaTeX. However, the printed pages looked way too clean, so we beat them around, scribbled on them, and even spilled coffee on them.
Basically, what you'd expect from a very handled work paper.
(incidentally, my prints of physics papers may not look too different.)
This was also a super fun opportunity to weave modern terminology into what is supposed to be an old reference manual. Not to mention the time travel references!
I'm particularly proud of the mentions of the "Carnot-Landauer-Novikov principle" in the Energetics section.
(Link to a PDF of the manual at the end.)
The presentation tied all of these elements together by showing TAU-800B at work, running some assembly that can be checked out pinned to the corkboard (and on the screen while running!).
As a last nod to the theme, what it's doing is displaying the current time, but backwards.
(it then takes the square root of the hours and minutes, just to show off the time travel speed ups. Big thanks to André for his 6-bit assembly calculation magic!)
(we stayed true to the emulation when displaying the time: you have to write the correct bytes to a reserved position in memory, in order to control a virtual 7-segment display 😃)
That's it!
We'd love for TAU-800B to be exhibited again, so if you have an art-showing space and are interested (or know someone who is/does), let me know! You can also send me your thoughts about it via email :)
=> https://drive.google.com/file/d/1e0hbB9f3arMnm-1_2CtwJ8l_YnwZI_lq/view?usp=sharing Manual PDF

2021-11-19
# Nobeamer
=> https://github.com/mikeevmm/nobeamer (nobeamer)
I did a presentation this week on how TeX and LaTeX are a lot like the C/C++ problem, with the additional complication that TeX does not have a Context-Free Grammar. For this, I read up on (some of) The TeXBook, and ended up writing/learning quite a lot about Plain/Knuth TeX. In particular, I was very satisfied with the philosophy of TeX, and its minimalism; I love that an empty program is
```
\bye
```
I'd echoed these feelings before; some years ago, when I understood less of TeX than I do now, I nonetheless wrote this mishmash of RegEx and Python I called "TeXDown," and one of the things it boasted was that an empty document would result in... well, an empty document.
=> https://github.com/mikeevmm/texdown (TeXDown)
As part of this presentation on TeX/LaTeX, I thought it would be fun to write the presentation in Plain TeX itself, and, with a bit of macro definition, it turned out that this was not at all impractical. In particular, the bare control over the glue space was very convenient to get nicely spaced stuff.
So, I collected these macros into a "package." Here's what a presentation looks like with nobeamer:
```
\input nobeamer.tex
\let\header\big
\let\subheader\large
\newslide
\topfill
{\center %
\header My presentation
\subheader (with subtitle)}
\newslide
\topfill
{\center %
Thank you for coming to my talk.
}
\bye
```
I would still use PowerPoint with the Computer Modern font if given the chance, but you could genuinely do worse than nobeamer if you're on Linux.
You can find the TeX file in the Github repository linked above; linked again below for convenience.
=> https://github.com/mikeevmm/nobeamer nobeamer
(You can email me your thoughts at miguelmurca+flounder﹫gmail⋅com)

2021-11-13
Today, the crisp night air, and the smell of distant smoke and mist. Unpretentiously, just that, the cars far off at one in the morning. And yet not deserted, an air of invitation and not too cold. How brilliant of a goodbye letter, and silly something else: the cool air and the whole, unstructured, as self-explanatory.

2021-11-11
# I saw two cats at a window today.
Apropos of something, someone talked about the definition of labor versus work.
> Labor is breaking rocks with a pick-ax while someone holds a gun to your head. Labor is your tenth hour in the warehouse when your vertebrae are starting to feel crunchy. Work can be challenging, surprising, effortful, emotional, but it isn’t something that requires intense coercion, although may require a little coercion, sometimes, in the form of a deadline or a structured work environment of some kind. Most tellingly, it feels rewarding, in a deeper way than the “thank fuck that’s over” swell of relief you feel at the end of a day of laboring. It feels like you did something on earth.
=> https://sashachapin.substack.com/p/if-you-have-writers-block-maybe-you (If You Have Writer's Block, Maybe You Should Stop Lying)
Apropos of something else, someone talked about not having to be anymore.
> I think a rarely discussed but commonly felt desire is to simply not exist. This should of course be distinguished from its far more violent and permanent cousin, an act which, admittedly, still holds tightly to this longing for nothingness, but the former sensation is far more vague: to not have to experience this any more. [...] It makes little sense, one moment of freedom for an eternity of nothing, of permanent silence. Is this a worthy sacrifice? I think not. [...]
=> https://www.youtube.com/watch?v=3mb3A3uCWjk (the desire to simply not exist)
I can't imagine work. Isn't it all labor? Work must become labor, the satisfaction fades away because work must be continued, and skills sharpened, everything marches to irrelevance except if you are this generation's Gauss or Mozart or Leonardo da Vinci or von Neumann. But you aren't, because you'd know by now, and the question wouldn't be posed in the first place.
> [...] It's not that hard. [...]
=> https://www.youtube.com/watch?v=-9S0R27YLKC (This 12-Year-Old Has Taken the Art World by Storm)
And yet you are supposed to derive enjoyment from within you. And maybe pretend you're a hedonist: why not seek well-being?
The labor of existing, should it be work?
Maybe for a different baseline. But the negation of work and labor and all baselines, can you imagine?
I saw two cats at a window today. One was black, the other brown. I usually take it as auspicious.

2021-11-04
# I dislike the word "carnism", because of the scientific connotations.
Oh no! An opinion on vegan/vegetarianism online? How irresponsible! I know, which is why I want to preface this with the following information: while I am neither vegan nor vegetarian, I respect these positions and their moral underpinnings. Personally, I'd rather just eat less meat, and start looking at eating more insect-based food, but I understand the importance of radical positions in changing the status quo.
=> gemini.circumlunar.space/users/solderpunk/gemlog/vodka_and_cigarettes_sustainability.gmi Maybe it's just a good old case of Solderpunk's "Vodka and Cigarettes Sustainability".
Regardless, please bear in mind I'm not trying to discredit veganism or vegetarianism here; I just very specifically have a problem with the word "carnism". I'm also just putting this in writing so that I can structure my thoughts and point towards this later.
Otherwise, it's basically what it says on the tin.
The thing is, even just judging from the Wikipedia article on the topic, the term "carnism" is overloaded with moral positions, while at the same time presenting itself as a scientifically sound concept. Which it is! Or can be. Because, yes, there is something to be said about an invisible belief system that allows us to have pets but also eat animals and not be very upset about it:
> We don't see meat eating as we do vegetarianism – as a choice, based on a set of assumptions about animals, our world, and ourselves. Rather, we see it as a given, the "natural" thing to do, the way things have always been and the way things will always be. We eat animals without thinking about what we are doing and why, because the belief system that underlies this behavior is invisible. This invisible belief system is what I call carnism.
=> https://en.wikipedia.org/wiki/Carnism Wikipedia:Carnism
But then this is turned upside down, whereby this concept is used as a basis to argue that it is objectively true that people who do not adhere to vegetarianism are psychologically flawed, and so that vegetarianism is objectively superior! See, for example, the opening line of the same article:
> Carnism is a concept used in discussions of humanity's relation to other animals, defined as a prevailing ideology in which people support the use and consumption of animal products, especially meat. Carnism is presented as a dominant belief system supported by a variety of defense mechanisms and mostly unchallenged assumptions.
By phrasing the definition in terms of "defence mechanisms" and "unchallenged assumptions", the connotations associated with having this set of beliefs underpinning your consumption of meat are now negative. And if you think I'm being uncharitable, here are some other quotes from the Wikipedia article on the positions of Melanie Joy (who coined the term) on carnism:
> Joy compares carnism to patriarchy, arguing that both are dominant normative ideologies that go unrecognized because of their ubiquity
> [M. Joy] argues that the "Three Ns" ["normal, natural, and necessary"] have been invoked to justify other ideologies, including slavery and denying women the right to vote, and are widely recognized as problematic only after the ideology they support has been dismantled.
It's fairly clear that you could do this for any moral position involving an "us versus them" situation: you designate the inherited belief system of the other party by some scientific-like term, and then you load negative connotation onto this term, giving an illusion of scientific backing of your position.
So, in summary: unless you are using the word "carnism" to refer to the existence of some hidden belief system underlying the consumption of meat (not "inherited", not "compelling", just hidden), it is my opinion that the word can only be otherwise used in bad faith, because it disguises a moral position as scientific fact.
=> https://i.imgur.com/mSHi8.jpg Or, do what you want, I'm not a cop.
=> https://www.youtube.com/watch?v=LhjMOtaBqVc (You know what's silly? This was all prompted by the (very good) Randy Feltface "Randy Writes a Novel" show, which I recommend watching.)
=> //miguelmurca.flounder.online 🔙

2021-08-29
# Decentralizing Trust
=> //blog.locrian.zone/tld-r Thinking about this post by Locrian.
Or, rather, this post by Locrian intersects some thoughts I've had on internet, decentralization, and the physical world.
Although the internet is immaterial [in a way], if you think hard enough about decentralizing it you end up at odds with the fact that there is a physical structure at the base of it all, which must be maintained.
=> https://what-if.xkcd.com/23/#:~:text=Max%20L [see Max L's question]
The Internet (the immaterial parts of it) are able to be 𝘵𝘳𝘶𝘭𝘺 decentralized (viz. cryptography). The physical world, not at all. And so people online notice every so often 𝘤𝘰𝘶𝘨𝘩𝘤𝘳𝘺𝘱𝘵𝘰𝘤𝘰𝘪𝘯𝘴𝘤𝘰𝘶𝘨𝘩 that it's not such a bad idea to have some degree of centralization on trusted (IRL) entities, because the Internet leaks into the real world.
On the other hand, this means that I can't connect to the internet unless my ISP agrees to let me do so (and keeps the cables intact), and that you and I can't really propose new TLDs.
I feel like no matter the advances in cryptography and communication tech (LoRa), it will never be possible to get rid of these fundamental centralizations, which are essentially anchors for trust in the real world.
(I imagine your thoughts on this are different if you are an anarchist.)
=> https://medium.com/@paulfrazee/the-anti-parler-principles-for-decentralized-social-networking-80a490909b38 Instead, the idea that you can "decentralize" these fundamental powers by making multiple, 𝗮𝘂𝗱𝗶𝘁𝗮𝗯𝗹𝗲 entities that keep each other in check (something that Paul Frazee kind of clumsily discusses in this article) is much more attractive to me.
Regardless of the form, decentralization makes (IRL) power structures more resilient to power concentration, which is obviously something we should/must strive for, but I do think that you won't be able to solve distrust with a blockchain.
=> //miguelmurca.flounder.online 🔙

2021-08-23
# Testing whether fleet can suggest gemlogs
Ok, now I'm testing the ability of fleet (that's what I call my flounder tweeting python script --- damn Twitter and their preemptive moves) to detect that I'm writing --- or rather, have written --- a long post and suggest that it is instead posted to my gemlog, with a fancy prompted title and a back button at the end and everything. Of course, the problem with testing such a thing is that I have to write some long text (fleet only prompts for a gemlog if the length of the text is above the size of a tweet), but I think that this should be enough. I'm saving it to my clipboard though, just in case.
=> gemini://miguelmurca.flounder.online 🔙

2021-08-22
# Something about game theory (I think?)
=> microblog.gmi Taken from my microblog, because it's slightly (but only slightly) long form.
I was at the beach the other day when something occurred to me.
In middle school, when no-one had a coin handy and you needed to do some coin-toss decision, people would «throw Even Odd». The protocol is as follows: two parties (usually not collaborating) pick "Even" or "Odd". Then, on the count of three, the two parties will "throw" a number (rock-paper-scissors style), the winner being whoever guessed the parity of the sum correctly.
(A moment to parse that last sentence.)
It's pretty intuitive* that no matter how you play you don't have much control over the outcome, unless you know exactly how the other player will act. As long as the two players are adversarial, it might as well be random.
*Read: I don't know the math to prove this. I know just enough about Game Theory to know that this is about Game Theory, of which I know nothing.
So what if you were missing a die, rather than a coin? Could you use a similar protocol to throw some number, rather than a binary yes or no?
As far as I can tell, yes: have the two players each throw a number between 0 and N-1 (say, 0 to 4 for N = 5), and take the sum modulo N.
For one, the distribution of the possible sums (modulo N) is always balanced. Or, at least according to this Python script, this holds for N in 2...100, and that's a lot of fingers. So, overall, you're not more likely to get some outcomes than others.
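The script itself isn't reproduced here, but a brute-force check along those lines might look like this (my reconstruction, not necessarily the original):

```python
from itertools import product

def residue_counts(n):
    """Count how often each value of (a + b) mod n occurs
    over all n*n possible "throws" a, b in 0..n-1."""
    counts = [0] * n
    for a, b in product(range(n), repeat=2):
        counts[(a + b) % n] += 1
    return counts

# Balanced for every N in 2..100: each residue occurs exactly N times.
assert all(residue_counts(n) == [n] * n for n in range(2, 101))
```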
On the other hand, I don't think any of the players has a way to bias the outcome; this to me is clearer if you think of the game as one of the players picking the value and the other one offsetting it (with wrap-around). Whatever one player picks, the other one can offset the value to wherever they want.
Likewise, if one of the players has a particular bias, the other player can shift that bias wherever; of course, the first player can then shift *their* bias, and so on, and so the game doesn't have a pure-strategy equilibrium.
=> https://www.mentalfloss.com/article/77260/surprising-psychology-behind-rock-paper-scissors Of course, psychological factors might come into play; I'd bet that people wanting a low value would intuitively throw a low number as well.
I'd like to see how a formal proof of this goes; if you know game theory and are so inclined, shoot me an email! (You can find my email on the home page.)
> 🏞
That was the original post. Since then, I've Googled whether (a+b mod N) is uniformly distributed for any N, if a and b are independently uniformly distributed over 0 to N-1, and the answer is yes.
=> https://math.stackexchange.com/questions/1683202/sum-modulo-of-two-random-variables-with-one-uniformly-distributed The proof is simple and I feel I ought to have got there. I reproduce it below:
(Let's see how well I can write math with unicode...)
```
A ~ UniformInteger[0, N)
B ~ UniformInteger[0, N)
Pr[A + B = c] = Σ(b=0; N-1; Pr[A = c - b ⋀ B = b]) = # c - b is understood to be mod N (such that -1 → N-1)
= Σ(b=0; N-1; Pr[A = c - b].Pr[B = b]) = # Both players make their choices independently
= Σ(b=0; N-1; 1/N . Pr[B = b]) =
= 1/N . Σ(b=0; N-1; Pr[B = b]) = # Total probability is 1...
= 1/N
```
# Making a Telegram Bot
I love bots that say weird stuff. From @horse_ebooks [1], to @dril_gpt2 [2], to @genderoftheday [3], there are a lot of good ones. So, naturally, I got around to making my own. However, unlike the first two accounts, I didn’t have a wealth of source material to train (say) a GPT model on, and I wanted to have a bit more control over my bot’s output; something more in the style of @genderoftheday. For building a Telegram bot, and with the help of a couple of NodeJS libraries, this turned out to be fairly simple.
=> ../assets/bot.png Here’s what we’ll be working on.
In this post I’ll guide you through setting up and building your own Telegram bot.
=> https://twitter.com/horse_ebooks [1] @horse_ebooks
=> https://twitter.com/dril_gpt2 [2] @dril_gpt2
=> https://twitter.com/genderoftheday [3] @genderoftheday
### How Telegram Bots Work
Before we get into text generation, it’s worth talking a bit about how Telegram bots work.
First, you must register/create your bot with Telegram’s servers. The process for this is fairly unique: you do it by talking to Telegram’s own bot, the @BotFather [5] (a Godfather pun I’m not yet sure whether I love or hate). The process is straightforward, and you should be able to navigate it just by talking with the bot and following the presented prompts. At this stage you can set up your bot’s username, name, profile picture, description and “about” section. Come back when you’re finished.
All set? Have you saved your BotFather-provided bot token? OK, now things get a little more tricky, because you can’t edit a bot’s behaviour directly on Telegram’s servers (where your newly created bot is now hosted). Instead, information flows as follows:
* Your bot gets a message, or more generally, something happens (a message is edited, someone joins a group it is in, etc.)
* Telegram’s servers send an HTTP message containing this information to some address you’ve previously set up (the endpoint, or web-hook)
* The server listening on that address receives and processes that information accordingly, and
* Using the bot token, sends an HTTP message back to Telegram’s servers asking for the bot to perform some appropriate reaction.
=> https://t.me/botfather [5] @BotFather
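Concretely, for a plain text message, the HTTP message Telegram sends to your web-hook carries a JSON body shaped roughly like this (all values here are illustrative, and some fields are omitted; the full schema is in Telegram's Bot API documentation):

```json
{
  "update_id": 123456789,
  "message": {
    "message_id": 42,
    "from": {"id": 1111, "is_bot": false, "first_name": "Ada"},
    "chat": {"id": 1111, "type": "private"},
    "date": 1621340000,
    "text": "Hello, bot!"
  }
}
```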
This means we need to set up an address to receive these events, and a server to do the processing and responding. For this, we’ll be using Amazon Web Services (AWS).
(In the off chance that you’re not yet familiar with AWS, it’s an umbrella name for the many web-related services Amazon offers. They have a generous free tier that should be more than enough to host your Telegram bot. The downside is that you’ll have to provide a credit card number to access this tier; use a temporary and/or prepaid card for peace of mind. Also: try not to be alarmed by the look of the AWS dashboard — AWS puts a lot of things in your face at once, and we won’t really be using most of them.)
To keep the costs down, we don’t want our code (and servers) to run all the time; after all, we only need to do something whenever our web-hook is called upon by Telegram’s servers. So, instead, we’ll be using a serverless approach. Don’t be fooled by the name, there’s definitely still a server involved [6], it’s just that you don’t manage it, and (as far as you care) it’s only running when your code is running — upon some event (which we’ll want to be the web-hook getting a message), some magic wake-up-the-server setup happens, your code is run, and the server shuts off again. This means you only pay for the time your code is actually doing something.
We’ll also need an actual HTTP interface to bridge our serverless code with the incoming HTTP request.
=> https://xkcd.com/908/ [6] definitely still a server involved
For the serverless code, we’ll use AWS’s Lambda [7], and for the HTTP interface, we’ll use API Gateway [8].
=> https://aws.amazon.com/lambda/ [7] Lambda
=> https://aws.amazon.com/api-gateway/ [8] API Gateway
### Actually Getting Started
At this point, it gets easier to explain stuff as we go.
We’ll code our bot using NodeJS [9], so we’ll start by initializing a new project. Create a new my_bot directory, and (in that directory) initialize a new project with
```bash
npm init # Follow the prompt to setup the details of your bot
```
(You’ll need to install NodeJS and npm if you don’t have them already.) [10]
=> https://nodejs.dev/ [9] NodeJS
=> https://docs.npmjs.com/downloading-and-installing-node-js-and-npm [10] NodeJS/npm installation guide
Now, instead of dealing with AWS’s dashboard, we can automate the setup and deployment of our code using the npm package serverless [11] (which has a very confusing name; I’ll refer to it as “npm-serverless” to avoid confusion).
We can install that with
```bash
npm install --global serverless
```
and use serverless (or sls) to call upon the npm-serverless command line interface. You’ll need to set it up with your AWS account:
```bash
sls config credentials --provider aws --key <key> --secret <secret>
```
(You can get the necessary key and secret in the “My Security Credentials” page of your AWS Dashboard, under the “Access Key” tab. [12])
=> https://www.serverless.com/ [11] serverless
=> https://console.aws.amazon.com/iam/home?#/security_credentials [12] “My Security Credentials” page of your AWS Dashboard, under the “Access Key” tab.
Now, let’s initialize npm-serverless in our project…
```bash
sls create --template aws-nodejs
```
… where we’ve specified we’ll be using AWS (and that it’s a NodeJS project). You should now have a serverless.yml YAML [13] file in your project directory. Opening it up, you’ll find that a lot of boilerplate has been created for you, most of it commented out. We’re only interested in some of the fields, so erase and/or modify the commented content until it looks like the following (I’ve added some comments myself):
```yaml
service: mybot
frameworkVersion: '2' # If this looks different, leave it as is!
provider:
  name: aws
  runtime: nodejs12.x # Also leave as is
  lambdaHashingVersion: 20201221 # And also leave as is

functions:
  telegram: # Previously "hello"
    handler: handler.webhook # Previously handler.hello
    events:
      - http:
          path: webhook
          method: post
          cors: true
```
Under the functions field, we’ve specified that we’ll have a serverless Lambda service called telegram; specifically, we’re saying that when it’s called, the webhook function of the file handler.js should be run. events specifies we’ll have an HTTP endpoint associated with this function; it’ll accept POST messages [14] at <url of our endpoint>/webhook.
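For reference, with this setup (API Gateway’s proxy integration, which npm-serverless uses by default for http events), the event object handed to our function is roughly of this shape. The only field we’ll use is body, which carries the raw request body as a string (values here are illustrative, and some fields are omitted):

```json
{
  "resource": "/webhook",
  "path": "/webhook",
  "httpMethod": "POST",
  "headers": {"Content-Type": "application/json"},
  "queryStringParameters": null,
  "body": "{\"update_id\": 123456789, \"message\": {...}}",
  "isBase64Encoded": false
}
```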
Let’s write the code to be run. Open up handler.js (which npm-serverless should have created for you), and let’s simplify the generated boilerplate code:
```javascript
async function webhook(event) {
  // Called when our web-hook receives a message.
  // The function receives an `event` argument, which contains the
  // incoming HTTP message information.
  // We'll do nothing for now.
  return {statusCode: 200};
}

module.exports = {webhook};
```
=> https://en.wikipedia.org/wiki/YAML [13] YAML
=> https://en.wikipedia.org/wiki/POST_%28HTTP%29 [14] POST messages
(Note how we’re exporting the function to be called. [15])
Returning an HTTP 200 status code [16] will let the Telegram server (or whoever hit the endpoint) know we received their message alright and have done whatever processing we need to do.
(If you fail to perform this acknowledgment, Telegram will keep re-sending the events for a while, which can result in unexpected repeated calls to your function and more server time, so make sure to return an OK status code.)
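One way to guarantee that acknowledgment, even when your own processing throws, is to wrap everything in a try/catch. This is a sketch, not part of the original setup; the commented-out middle stands in for whatever your handler actually does:

```javascript
async function webhook(event) {
  try {
    // Anything that might throw (bad JSON, missing fields, a failed
    // request of our own) goes inside the try block.
    const body = JSON.parse(event.body);
    // ... process `body` here ...
  } catch (error) {
    // Log the error for later inspection, but swallow it: we still
    // return 200, so Telegram won't keep re-sending the same update.
    console.error(error);
  }
  return {statusCode: 200};
}

module.exports = {webhook};
```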
We can now deploy (upload to the cloud) our service; npm-serverless will take care of setting up Lambda and API-Gateway for us:
```bash
sls deploy
```
=> https://www.sitepoint.com/understanding-module-exports-exports-node-js/ [15] NodeJS module exports
=> https://en.wikipedia.org/wiki/List_of_HTTP_status_codes [16] HTTP 200 status code
If everything was set up correctly, after a little while you should get a message confirming that the service was deployed, along with related details. Of these, we’ll need the endpoint address, listed as POST - <link>, so that we can point the Telegram servers to that address.
```
Serverless: Packaging service...
...
Service Information
service: mybot
stage: dev
stack: mybot-dev
resources: 12
api keys:
  None
endpoints:
  POST - https://<...link...>/dev/webhook
functions:
  telegram: mybot-dev-telegram
layers:
  None
```
Configuring the bot’s endpoint is done the same way as making it perform any other action: by sending an HTTP POST message to the address [17] https://api.telegram.org/bot<TOKEN>/<SERVICE>, where you should replace <TOKEN> with your BotFather-provided token (notice the leading “bot”), and <SERVICE> according to what you want to do. (To set the endpoint, that’s setWebhook.)
You can do this with curl if you’re on Linux (handy one-liner below), but in the spirit of making things clear and cross-platform, we can quickly write some Javascript to do it.
```bash
curl --request POST --url https://api.telegram.org/bot<TOKEN>/setWebhook --header 'content-type: application/json' --data '{"url": "<ENDPOINT URL>", "allowed_updates": ["message"], "drop_pending_updates": true}'
```
=> https://core.telegram.org/bots/api#making-requests [17] Telegram HTTP Requests
To make an HTTP request from NodeJS we’ll bring in bent [18], a very nice lightweight requests library:
```bash
npm install bent
```
=> https://github.com/mikeal/bent [18] bent
And now we can create setWebhook.js …
```javascript
// Import bent
const bent = require('bent');
// Load our Telegram bot key and AWS endpoint URL from the environment variables.
// You could write them out explicitly (as strings) here, but that's dangerous!
// This way, there's no problem if your code ever becomes public
// (for example, you host it on a Github repository).
// Otherwise, this would leak your keys, and allow anyone to control your bot.
const TELEGRAM_KEY = process.env.TELEGRAM_KEY;
const ENDPOINT_URL = process.env.ENDPOINT_URL;
// Create a callable object that will POST JSON to the bot URL, and expect a
// 200 OK status code.
const poster = bent(`https://api.telegram.org/bot${TELEGRAM_KEY}/`, 'POST', 'json', 200);
// The message we will be POSTing to the URL; the field names should speak for themselves,
// but you can find their description, as well as other allowed fields at
// https://core.telegram.org/bots/api#setwebhook
const post_message = {
  "url": ENDPOINT_URL,
  "allowed_updates": ["message"],
  "drop_pending_updates": true,
};
// bent is asynchronous, so we wrap it in an async function
async function setIt() {
  const response = await poster('setWebhook', post_message);
  // Let's log the response, just to get some visual feedback.
  console.log(response);
}
// Run the asynchronous code.
setIt();
```
… and run it.
```bash
TELEGRAM_KEY='<your bot key>' ENDPOINT_URL='<your endpoint url>' node setWebhook.js
```
You should get a response confirming your web-hook was successfully set.
```json
{ ok: true, result: true, description: 'Webhook was set' }
```
Note: By default, setWebhook.js will be packaged and uploaded to S3 (AWS’s storage service) alongside the rest of the code as part of your serverless setup. This isn’t a big problem, but it’s not really needed. If you want to keep things tidy and save some cloud space, you can exclude setWebhook.js from your serverless service by adding
```yaml
package:
  patterns:
    - '!setWebhook.js'
```
to your serverless.yml file.
### Reading and Sending Messages
Your code is now run every time your bot gets a message, but it’s not doing anything! Thankfully, the process for making the bot reply is very similar to what we did above to set the web-hook. (If you used that curl one-liner, you might want to go back a couple of paragraphs.)
First, let’s go back to handler.js and parse the incoming messages into something we can process. If we refer to the Telegram bot API documentation [19], we’ll find that for message events, the body of the HTTP request will contain a message field, itself an object with more information about the message [20]. (We’ve set up our web-hook to only receive message events, so we don’t need to worry about other events.)
```javascript
async function webhook(event) {
  // Parse the body of the incoming POST message as JSON
  const body = JSON.parse(event.body);
  // Get the `message` field of the incoming update. Because
  // we've only subscribed to message events, we are guaranteed
  // that the body will always have this field.
  const message_object = body.message;
  // From Telegram's API docs*, the `text` field contains the
  // text of the incoming message (for plain text messages).
  // * https://core.telegram.org/bots/api#message
  const text = message_object.text;
  // TODO: We want the bot to echo this message, but how?
  return {statusCode: 200};
}

module.exports = {webhook};
```
Now all we’re missing is some code to make the bot reply. Telegram’s bot API docs [21] tell us the endpoint for doing this is sendMessage, i.e., we want to send a POST message to https://api.telegram.org/bot<TOKEN>/sendMessage.
=> https://core.telegram.org/bots/api#update [19] Telegram bot API documentation
=> https://core.telegram.org/bots/api#message [20] itself an object with more information about the message
=> https://core.telegram.org/bots/api#sendmessage [21] Telegram’s bot API docs
Let’s once again use bent for that (npm install bent if you haven’t already):
```javascript
const bent = require('bent');
// We set up a function to POST to the Telegram API the same way
// as before; we'll use the `telegram` function to send POST requests
// to various endpoints.
const TELEGRAM_KEY = process.env.TELEGRAM_KEY;
const telegram = bent(`https://api.telegram.org/bot${TELEGRAM_KEY}/`, 'POST', 'json', 200);
// We make the bot send a message by POSTing a well-formed object
// to the `sendMessage` endpoint of the bot API. This object must
// always contain a `chat_id`, indicating where the message goes,
// and a `text` string field, which is the actual content of the
// message.
// Let's wrap all of this in a function.
async function sendBotMessage(chat_id, text) {
  const options = {chat_id: chat_id, text: text};
  return telegram('sendMessage', options);
}

async function webhook(event) {
  const body = JSON.parse(event.body);
  const message_object = body.message;
  const text = message_object.text;
  // Echoing the incoming message is now easy, but we'll
  // need to get the correct `chat_id`:
  const chat_id = message_object.chat.id;
  // Send it back!
  await sendBotMessage(chat_id, text);
  return {statusCode: 200};
}

module.exports = {webhook};
```
You’ll notice that we need a chat_id to indicate where we’re sending our message to; because we want to reply to the incoming message, we grab that identifier from message_object.chat.id.
Before we upload this to the cloud (and bask in glorious bot echoage), there’s something we need to fix. We’re once again grabbing the Telegram bot key from the environment variables, so that sharing the code doesn’t mean sharing access to the bot; but this code will now be running in the cloud, where we can’t just set our variables in the command line.
There are two solutions to this:
* Set the environment variables manually in the AWS Lambda dashboard;
* Let npm-serverless set it up.
We’ll be going with number 2, because it’s easier and makes your project more self-contained (as in, sls deploy sets everything up in one go). (If you really want to use the AWS Lambda dashboard, and/or want to confirm that variables were set, look under the “Configuration > Environment variables” tab.)
In serverless.yml, under the provider field, we can add an environment entry. Any sub-entries (of the form name: value) will be set up as environment variables by npm-serverless.
But! Adding an entry with the token to serverless.yml defeats the purpose of grabbing the token from the environment variables in the first place: if we want to share our bot setup, we’ll have to share serverless.yml as well.
The solution to this is creating another YAML file (which we’ll never share with anyone) containing these secret values.
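For instance, a secrets.yml next to serverless.yml could contain just the one entry (the key name TELEGRAM_KEY is our choice here, and just has to match whatever we reference elsewhere):

```yaml
# secrets.yml --- never commit or share this file!
TELEGRAM_KEY: '<your BotFather-provided bot token>'
```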
Then we can load those into our serverless.yml with ${file} [22]:
```yaml
...
provider:
  ...
  environment:
    TELEGRAM_KEY: ${file(./secrets.yml):TELEGRAM_KEY}
```
=> https://www.serverless.com/framework/docs/providers/aws/guide/variables/#reference-variables-in-other-files/ [22] ${file}
That should do it! Run sls deploy to deploy your new setup, and message your bot on Telegram. It should reply with whatever you just sent it!
(If you’re using source control (e.g. git) for your project, now is a good time to add secrets.yml to your .gitignore file [23].)
=> https://git-scm.com/docs/gitignore [23] .gitignore
### C’mon, Do Something
Of course… That’s not very interesting. We want our bot to say things!
To do that, we need to decide what sort of things it should say. ygg is a small piece of software [24] I’ve written to deal with this problem. It turns a description of what sort of sentences should be generated into a Javascript file that generates those sentences.
You can install ygg with
```bash
npm install --global @miguelmurca/ygg
```
=> https://github.com/mikeevmm/YGG [24] YGG
Now we need to describe our valid sentences. You can find a description of the syntax on ygg's page [25], but the core idea is that we compose blocks that all reduce down to a sentence. ygg will try to give you helpful information if you get anything wrong, so don’t be afraid to experiment!
=> https://github.com/mikeevmm/YGG#syntax [25] YGG Syntax
For our current purposes, I’ve written a small grammar that will produce horoscope-like messages. Create a new file, grammar.ygg with the following:
```ygg
(
  |("Today" "Tonight" "Tomorrow" "Soon")
  " "
  |(
    ("the " |("moon" "sun" "stars" "Earth"))
    |("Mercury" "Pluto" "Venus" "Mars")
  )
  " will be "
  |("retrograde" "shining brightly" "in their house" "propitious")
  "."
  ?(" This means " |("good things" "great danger") "!")
)
```
Now we can compile this into something we can use in our handler.js file by calling ygg:
```bash
ygg grammar.ygg grammar.js
```
This should produce a new file, grammar.js, which exports a function, generate, that we can call from our handler.js file to get a response for our bot:
```javascript
const bent = require('bent');
// Let's require the newly created `grammar.js` file;
// Note the ./ !
const grammar = require('./grammar');
const TELEGRAM_KEY = process.env.TELEGRAM_KEY;
const telegram = bent(`https://api.telegram.org/bot${TELEGRAM_KEY}/`, 'POST', 'json', 200);
async function sendBotMessage(chat_id, text) {
  const options = {chat_id: chat_id, text: text};
  return telegram('sendMessage', options);
}

async function webhook(event) {
  const body = JSON.parse(event.body);
  const message_object = body.message;
  const text = message_object.text;
  const chat_id = message_object.chat.id;
  // We generate a valid response via the `generate`
  // function in our `grammar.js` file; notice that
  // we pass in the input! This will allow you to
  // change the generated answers depending on input
  // patterns. See `ygg`'s documentation for more
  // information on this (the `&` pattern).
  const response = grammar.generate(text);
  // Now we send the response back!
  await sendBotMessage(chat_id, response);
  return {statusCode: 200};
}
module.exports = {webhook};
```
That’s it! All that’s left to do is upload our updated code to the cloud; but before that, and like before, you can exclude grammar.ygg from the files to be uploaded, as it’s not needed there:
```yaml
package:
  patterns:
    - '!setWebhook.js'
    - '!grammar.ygg'
```
And so finally:
```bash
sls deploy
```
Success! You should be the proud owner of a bot that says weird stuff! Try messaging your bot, and see if it replies with a sentence from your grammar.
### Conclusion
If you got this far, congratulations! You are now ready to look at Telegram’s Bot API page [26] and venture on in making more sophisticated Telegram bots, or maybe try your hand at making a Twitter bot?
Before you go, I’d like to give you a final tip. As you might have noticed, sls deploy can take quite a while to run. This is because the command triggers an update of your whole Lambda service. If all you’ve done is change source code, you can speed up the update by signaling to npm-serverless that that’s the case, with
```
sls deploy function --function telegram
```
(or whatever function name you have defined in your serverless.yml)
=> https://core.telegram.org/bots/api [26] Telegram’s Bot API page
And that’s really it! Best of luck in your future bot endeavours.
If you enjoyed this post, consider:
=> https://www.paypal.me/miguelmurca/2.50 buying me a coffee
=> https://github.com/mikeevmm/ checking out my Github profile, or
=> mailto:zvthryzhepn+rot13@tznvy.pbz just dropping me a line.
By Miguel M. on May 18, 2021.
# Combining Selected Applications' Sound With Voice Into Virtual Microphone (Linux)
> Sometimes I'll want to share music with friends via Skype, Discord, etc.
>
> I don't want to share the full output of my sound card (which I can do by setting the corresponding monitor as the microphone), because then my friends can hear themselves (which is annoying), and there might be other sounds playing besides the music (e.g. notifications) that I wouldn't want them to hear either. Also, when I do this, I am no longer able to speak in the call (because my microphone is set to something else).
>
> How can I mix the output sound of some applications with the sound of my microphone, and send that signal over voice chat?
>
> I'm using Linux Mint.
(Below I'll assume the use of an Ubuntu-like Linux distribution --- in my case, Linux Mint 20.1 Cinnamon --- and that Pulse Audio Volume Control is installed; you can install it with sudo apt install pavucontrol. This should work for any Linux distro if you're not doing something too weird, but I make no guarantees. I also figured out the following as I went, so apologies for any inaccuracies.)
First, let me go over some Pulse Audio related concepts that will make the explanation below easier to understand:
* A sink is something that "consumes" audio signal (presumably transforming that signal into something like actual sound). This means your speaker, for example, is a sink. You can also refer to these as output devices.
* A source is the converse concept; it's something that produces signal, like a microphone. You can also refer to these as input devices.
* A monitor is a fictitious (virtual) input device (source) that is created for every output device (sink) on your computer. It simply reproduces the sound being output on the corresponding sink, as if it were coming into the computer.
You should also know that, when working with Pulse Audio, there are three main tools: the Pulse Audio Volume Control GUI interface, which you can start from a terminal with pavucontrol, the pactl [1] command, and the pacmd [2] command.
=> https://linux.die.net/man/1/pactl [1] pactl
=> https://linux.die.net/man/1/pacmd [2] pacmd
Most of the functionality (that we're interested in) of the pactl command is achieved via built-in "modules". You can call these modules with
```
pactl load-module <module name> <parameters>
```
and deactivate them/reverse their effects with
```
pactl unload-module <module name>
```
The changes made with this aren't permanent; in a panic just log out and log in again, and you should be able to start fresh.
pactl's feedback is also pretty poor --- it will semi-silently fail with "Failure: Module initialization failed" whenever there's something wrong with your command --- but you can find a pretty good documentation reference here [3].
=> https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Modules [3] PulseAudio modules
OK, so with that, here are the ingredients that we have available and the game plan:
* We can create so-called "null sinks". These are virtual sink devices that are capable of receiving signal as if they were, e.g., speakers, but don't play sound. These are created with the module-null-sink module [4].
* We can "loopback" the sound from a source back into a sink. Think of this as forwarding, for example, the sound coming in from your microphone directly into your speakers. This is achieved with the module-loopback module [5].
* We can select what sink to play the sound of each application into using the pavucontrol interface; whenever there is more than one sink, you can select what sink to use for each application in the Playback tab.
=> https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Modules#module-null-sink [4] null sink module
=> https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Modules#module-loopback [5] loopback module
The plan, then, is the following:
* Find out our sound card's source (built-in microphone) and sink (speakers/headphones)
* Create a null sink device (transmit) that receives the sound of the applications we want to broadcast
* Loopback the sound from transmit to our sound card, so that we, too, can hear what we're broadcasting (because, remember, null sinks will just drop what they receive)
* Create another null sink device (combined) that receives the sound of transmit and the sound of the microphone --- this is the signal we want to broadcast, but not what we want to hear, since we don't want to hear ourselves speaking
* Use the monitor of combined as our virtual microphone in the Skype call
So, putting the plan into action:
### 0. Finding out our built-in sinks/sources
pactl lets us enumerate our sinks/sources easily:
```
pactl list sinks
```
tells us about a single sink
```
Sink #0
    State: IDLE
    Name: alsa_output.pci-0000_00_1f.3.analog-stereo
    Description: Built-in Audio Analog Stereo
    (... more information ...)
```
which is my sound-card's output, i.e., the laptop's speaker or headphones, and we can find the sources with
```
pactl list sources
```
which yields
```
Source #0
    State: RUNNING
    Name: alsa_output.pci-0000_00_1f.3.analog-stereo.monitor
    Description: Monitor of Built-in Audio Analog Stereo
    (... more information ...)
    Properties:
        device.description = "Monitor of Built-in Audio Analog Stereo"
        device.class = "monitor"
        alsa.card = "0"
        alsa.card_name = "HDA Intel PCH"
    (... more information ...)
Source #1
    State: RUNNING
    Name: alsa_input.pci-0000_00_1f.3.analog-stereo
    Description: Built-in Audio Analog Stereo
```
I have two sources set up on my computer: alsa_input.pci-0000_00_1f.3.analog-stereo, which is my built-in microphone (coming from the sound-card), and alsa_output.pci-0000_00_1f.3.analog-stereo.monitor, which is the monitor of sink alsa_output.pci-0000_00_1f.3.analog-stereo, i.e., the sound coming out of my speakers or headphones as if it were coming into the PC.
### 1. Create the transmit sink
This is straightforward to do with module-null-sink:
```
pactl load-module module-null-sink sink_name=transmit
```
Note that we've named this new sink transmit, but if you look at pavucontrol, under the Output Devices tab, you'll find that there is indeed a new device, but that it's called "Null Output". This is because pavucontrol displays the device's description as the display name; if we enumerate our sinks again...
```
$ pactl list sinks
(... other sinks ...)
Sink #6
    State: IDLE
    Name: transmit
    Description: Null Output
    (... more information ...)
    Properties:
        device.description = "Null Output"
        device.class = "abstract"
        device.icon_name = "audio-card"
    Formats:
        pcm
```
we can see that the sink is indeed called transmit, but its device.description property says "Null Output". We can fix that with pacmd:
```
pacmd 'update-sink-proplist transmit device.description="Signals to Transmit"'
```
(If you're getting "Failed to parse proplist." back, note the single quotes [6].)
=> https://gitlab.freedesktop.org/pulseaudio/pulseaudio/-/issues/615 [6] "Failed to parse proplist"
When we created our null sink transmit, a corresponding monitor source, transmit.monitor, was created as well. (You can check this by calling pactl list sources again.) We should fix its name too, which we can again do with pacmd:
```
pacmd 'update-source-proplist transmit.monitor device.description="Monitor of Signals to Transmit"'
```
Now if you play some music, for example, you'll be able to forward that signal to the new sink under the Playback tab of pavucontrol. That signal is no longer audible, though, since it's being played to a null sink; let's fix that.
### 2. Loopback the sound from transmit
Recall that sound is played out of your speakers/headphones when sent to (in my case) sink 0, alsa_output.pci-0000_00_1f.3.analog-stereo. So if we loopback the sound going into the transmit sink into this sink, we should be able to hear it again. Of course, we can't loopback signal from a sink; signal must come from a source. Luckily, monitors are precisely the signal going into a sink, presented as a source.
So let's loopback transmit.monitor to our sound-card, using module-loopback [7]:
```
pactl load-module module-loopback source=transmit.monitor sink=alsa_output.pci-0000_00_1f.3.analog-stereo
```
You should now be able to hear the sounds that you send to the "Signals to Transmit" sink again.
=> https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Modules#module-loopback [7] loopback module
### 3. Combine transmit and the microphone
The procedure now is very similar to points 1 and 2; we'll create another null sink that receives both the transmit.monitor signal, and the microphone input. It's the monitor of this signal that will serve as the virtual microphone to use.
We start by creating the null sink combined...
```
pactl load-module module-null-sink sink_name=combined
```
... and fixing the default names that appear in pavucontrol...
```
pacmd 'update-sink-proplist combined device.description="Transmit+Microphone Sink"'
pacmd 'update-source-proplist combined.monitor device.description="Transmit+Microphone"'
```
... and finally loopbacking both our microphone and transmit monitor into the combined channel:
```
pactl load-module module-loopback source=alsa_input.pci-0000_00_1f.3.analog-stereo sink=combined
pactl load-module module-loopback source=transmit.monitor sink=combined
```
### 4. Profit
Now, when setting up a call, a microphone named "Transmit+Microphone" should be available --- this is the combined signal of your selected sounds and your voice.
Note that all this loopbacking and such may incur a CPU overhead, but my laptop is not very powerful at all and I had no problems other than some latency. To undo all of the above, call
```
pactl unload-module module-loopback
pactl unload-module module-null-sink
```
or log off and on again.
## TL;DR
```
pactl load-module module-null-sink sink_name=transmit
pacmd 'update-sink-proplist transmit device.description="Signals to Transmit"'
pacmd 'update-source-proplist transmit.monitor device.description="Monitor of Signals to Transmit"'
pactl load-module module-null-sink sink_name=combined
pacmd 'update-sink-proplist combined device.description="Transmit+Microphone Sink"'
pacmd 'update-source-proplist combined.monitor device.description="Transmit+Microphone"'
pactl load-module module-loopback source=alsa_input.pci-0000_00_1f.3.analog-stereo sink=combined
pactl load-module module-loopback source=transmit.monitor sink=combined
pactl load-module module-loopback source=transmit.monitor sink=alsa_output.pci-0000_00_1f.3.analog-stereo
```
If this failed for you, read the post above; in particular, alsa_input.pci-0000_00_1f.3.analog-stereo may be named differently on your system.
By Miguel M., 27 Jan. 2021