From c6019b67ac7c04f3d823b2623a73c2cafe90d487 Mon Sep 17 00:00:00 2001 From: Mafyuh Date: Tue, 26 Mar 2024 19:39:56 +0000 Subject: [PATCH] done with spl post --- archives/index.html | 2 +- index.json | 2 +- posts/index.html | 2 +- posts/spl-token-cli/index.html | 16 +++++++++------- tags/homelab/index.html | 2 +- 5 files changed, 13 insertions(+), 11 deletions(-) diff --git a/archives/index.html b/archives/index.html index 817269b..1e0268c 100644 --- a/archives/index.html +++ b/archives/index.html @@ -210,7 +210,7 @@

How to create a Solana Token (SPL) from CLI with metadata

-
March 15, 2024 · 9 min · 1708 words · Matt
+
March 15, 2024 · 9 min · 1723 words · Matt
diff --git a/index.json b/index.json index 0061899..b4694a5 100644 --- a/index.json +++ b/index.json @@ -1 +1 @@ -[{"content":"I wanted to create an SPL token and after looking online I couldn\u0026rsquo;t find an updated guide. I mainly just found Keyglowmax (SCAM). So I thought I would learn and share. There are much easier ways to create these tokens but they cost $ and spending more $ than needed is no fun. They also have you connect your wallet, which is enough of a worry. This guide costs as little SOL as possible as everything is transacted directly on-chain. Everything is done from the CLI.\nThis guide just covers the basics; the tools used are far more powerful than what I use them for. This is just creating a basic token with no taxes, locked supply, or anything complex, but these tools do support those options. If you are interested in doing more, I would read the proper documentation.\nhttps://docs.solanalabs.com/cli/install https://metaboss.rs/overview.html https://spl.solana.com/token NetworkChuck has a video from late 2021 on doing this, but some commands are a bit outdated, and Solana updated their entire metadata process in 2022.\nI am using an Ubuntu 22.04 VM with 60GB storage to run these commands.\nStarting balance: 0.079975 SOL Ending balance: 0.05731652 SOL Total SOL cost: 0.02265848 SOL ($4.22 on 3/15/2024) Installing Solana Tools First we need to download the Solana tools to our system:\nsh -c \u0026#34;$(curl -sSfL https://release.solana.com/stable/install)\u0026#34; Then run the export path command that is given to you:\nexport PATH=\u0026#34;/home/mafyuh/.local/share/solana/install/active_release/bin:$PATH\u0026#34; Restart your terminal session.\nCreating Wallet We will create a new SOL wallet to fund our token. To do this, run:\nsolana-keygen new You don\u0026rsquo;t have to put a passphrase if you don\u0026rsquo;t want to. I would back up your recovery seed phrase and take note of the public address. I would fund this wallet with some SOL as well at this time.\nKeep note of the keypair directory for a later step.\nCheck your SOL balance with:\nsolana balance Install Rust We need Rust in order to create the token. To install Rust, run:\ncurl --proto \u0026#39;=https\u0026#39; --tlsv1.2 -sSf https://sh.rustup.rs | sh Press Enter for the default installation. Once completed, restart your session again.\nThen we need to install some needed packages:\nsudo apt install libudev-dev llvm libclang-dev libssl-dev pkg-config build-essential protobuf-compiler -y Install spl-token-cli Now, using Rust, we are gonna install Solana\u0026rsquo;s CLI tools; this will take a few minutes.\ncargo install spl-token-cli Create Token Creating a new token is simple. Make sure your wallet is funded with SOL and just run:\nspl-token create-token Your token\u0026rsquo;s address will be printed on screen. You will use this address in pretty much all the rest of the steps, so keep it handy.\nNote this creates a 9-decimal token with no extensions; if you want to change this and add complexity to the token, check out this\nIf you want to create a token with something other than 9 decimals, use:\nspl-token create-token --decimals \u0026lt;# of decimals\u0026gt; For a list of all the things you can do, run:\nspl-token create-token --help Now we need to create a token account for this token:\nspl-token create-account \u0026lt;TOKEN_ADDRESS\u0026gt; Example:\nspl-token create-account 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr If you get errors like:\n\u0026ldquo;unable to confirm transaction. 
This can happen in situations such as transaction expiration and insufficient fee-payer funds\u0026rdquo;\nYou just need to retry a few times; it will eventually go through, but it sometimes takes 3-4 runs.\nMinting Tokens Now that you have a token and an account for the token, you can actually mint some tokens. To do this, run:\nspl-token mint \u0026lt;TOKEN_ADDRESS\u0026gt; \u0026lt;# of tokens\u0026gt; \u0026lt;ACCOUNT_ADDRESS\u0026gt; Example:\nspl-token mint 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr 1000000 CkaGbdriXVMHtzFBPtnpDjQvZ9gM9bwd8XdTTKR2Wx32 To see your tokens, you can run:\nspl-token accounts Now you will want to send these tokens to a new address, so make a new wallet and get its pubkey, then to send these tokens run:\nspl-token transfer --fund-recipient --allow-unfunded-recipient \u0026lt;TOKEN_ADDRESS\u0026gt; \u0026lt;# of tokens\u0026gt; \u0026lt;NEW_ADDRESS\u0026gt; Example:\nspl-token transfer --fund-recipient --allow-unfunded-recipient 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr 1000000 2DDyEt5N4y77ETWhhUmkZiympQbpjkfrt8FcMKhB1iWU Installing Metaboss Once this completes you can install Metaboss, which is needed to upload metadata. You can try to use spl-token\u0026rsquo;s built-in metadata uploader as well, using --enable-metadata and initialize-metadata during token creation, but I couldn\u0026rsquo;t get this to work. Metaboss worked on the 1st try. Again, this takes some time:\ncargo install metaboss Arweave/Github While we wait on Metaboss to install, we should start uploading our token\u0026rsquo;s logo to a cloud provider. I use Arweave in this example, but you can use anything really. There are also many ways to upload to Arweave, so this is just a friendly example that\u0026rsquo;s free.\nFirst create an account at https://akord.com/use-arweave Upload your image to a new vault. (PNG) Click on the information icon next to your image and copy the arweave.net URL. (Not under Share) We need this for our JSON file we will create next.\nNow you can create a JSON file, and in it paste the following:\n{ \u0026#34;name\u0026#34;: \u0026#34;TOKEN_NAME\u0026#34;, \u0026#34;symbol\u0026#34;: \u0026#34;SYM\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Description of token\u0026#34;, \u0026#34;image\u0026#34;: \u0026#34;https://arweave.net/image-url-from-above\u0026#34; } If you want metadata extensions, use:\n{ \u0026#34;name\u0026#34;: \u0026#34;TOKEN_NAME\u0026#34;, \u0026#34;symbol\u0026#34;: \u0026#34;SYM\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Small description of your token.\u0026#34;, \u0026#34;image\u0026#34;: \u0026#34;https://arweave.net/image-url-from-above\u0026#34;, \u0026#34;extensions\u0026#34;: { \u0026#34;website\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;twitter\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;telegram\u0026#34;: \u0026#34;\u0026#34; } } Now save this file with a .json extension and upload it to Arweave just like the image. Now we need this JSON file\u0026rsquo;s Arweave link. Copy it from Akord and create a new JSON file in your Solana server\u0026rsquo;s working directory. Fill in the following:\n{ \u0026#34;name\u0026#34;: \u0026#34;TOKEN_NAME\u0026#34;, \u0026#34;symbol\u0026#34;: \u0026#34;SYM\u0026#34;, \u0026#34;uri\u0026#34;: \u0026#34;https://arweave.net/json-file-arweave-url\u0026#34; } Use the JSON file\u0026rsquo;s Arweave link as the URI. Name this file metadata.json.\nIf you are using Github, just make a new repo, upload the JSON file and image, and copy the RAW URL. The URL should look like https://raw.githubusercontent.com/xxxxxxx. 
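A quick sanity check before moving on (the path below is a made-up example; use your own raw URL): fetch the file from your Solana box and make sure it returns the exact JSON you wrote:\ncurl -s https://raw.githubusercontent.com/youruser/yourrepo/main/metadata.json If your JSON prints back cleanly, the link is safe to use as the URI in the next step. 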
Probably easier to use Github tbh, especially if you don\u0026rsquo;t even know what Arweave is.\nCreating Metadata First we need to update our RPC URL; to set it to mainnet, run:\nsolana config set --url https://api.mainnet-beta.solana.com --keypair /home/mafyuh/.config/solana/id.json Fill in your keypair path from earlier.\nNow that Metaboss is installed, we just need to run 1 command to create our token\u0026rsquo;s metadata; again, it may take a few tries:\nmetaboss create metadata -a \u0026lt;TOKEN_ADDRESS\u0026gt; -m metadata.json You should be able to go to Solscan and see your updated metadata! It should appear in the SOL wallets soon after.\nUpdating Metadata If you ever need to update your metadata, you can do so by running:\nmetaboss update uri --keypair /home/mafyuh/.config/solana/id.json --account \u0026lt;TOKEN_ADDRESS\u0026gt; --new-uri https://arweave.net/new-arweave-json-url Or you can just edit your existing JSON file.\nBONUS Creating a Market Now that you have a coin ready to go, you probably wanna get it listed so others can buy. I\u0026rsquo;ll try to make this process as cheap and easy as possible. Thanks to this Reddit post for finding these values.\nYou need to connect your wallet and have the tokens in the wallet that is connected for this to work, so either restore your private key or send tokens to your wallet on PC.\nNote I would not create this small of a market for a production coin, as what you are paying for when creating a market is essentially space on the blockchain for all your transactions. Long-term projects should certainly not pay this little for a market; it is probably only good for smaller meme coins. If you are planning a long-term project, you should probably be paying a few SOL for your market fee.\nRaydium has some good docs on how to create a market and pool; I would review those docs as well.\nFirst go to https://openbook-explorer.xyz/market/create Click Existing under mints Base Mint: Your token address Quote Mint: So11111111111111111111111111111111111111112 (this is swapping for SOL) Under Mints, since by default our token was 9 decimals, we will set these values Min Order size: 0.1 Price Tick: 0.99999998 or 0.99999999 Under advanced options check use advanced options. (this is what we are paying for; if long-term, pay the 2.78 SOL) Event Queue Length: 128 Request Queue Length: 63 Orderbook Length: 201 At this time the cost to create this market is 0.32 SOL. Keep note of the market address.\nBONUS Creating Pool Now that we have a market, we need to create a pool. I\u0026rsquo;ve found Raydium to have the cheapest fees, but I would not cheap out on how much SOL you delegate to the pool, as this is gonna be your liquidity, and having almost no liquidity is gonna be a big red flag. I have in the past just delegated .1 SOL and it worked, but trust me, this is not gonna work out well.\nFirst go to https://raydium.io/liquidity/create/ Connect Wallet Paste Market ID Under Price and initial liquidity What we are doing here is setting our token\u0026rsquo;s starting price; the amount of tokens you put in the pool at the start decides how much they\u0026rsquo;re worth compared to SOL. All your tokenomics and things like this should probably already be done at this point, unless you\u0026rsquo;re just YOLO\u0026rsquo;ing it like I did. This is by far the most costly part of the process. Set a certain start time if you want. Hit Initialize Liquidity Pool and confirm in your wallet. 
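For a rough sense of the pricing math here (numbers made up for illustration): seeding the pool with 1,000,000 tokens against 1 SOL implies a starting price of 1 / 1,000,000 = 0.000001 SOL per token, so a buyer swapping 0.01 SOL would get roughly 10,000 tokens, less fees and slippage. Scale the SOL side up if you want a higher starting price. 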
The total fee currently is .68 SOL to create this pool.\nYou will receive all the LP tokens in your wallet.\nBurning LP/ Revoke Authority You will probably want to burn these LP tokens so buyers won\u0026rsquo;t be scared off. There are many ways to do this; you can use the CLI with this command:\nspl-token burn \u0026lt;TOKEN-ACCOUNT-ADDRESS\u0026gt; \u0026lt;AMOUNT\u0026gt; You can get the address on Solscan. Some wallets like Solflare allow you to burn tokens through the wallet. Or you can use online services like https://sol-incinerator.com/\nYou will also want to revoke mint authority as well as freeze authority by running:\nspl-token authorize \u0026lt;token_address\u0026gt; freeze --disable And for mint authority:\nspl-token authorize \u0026lt;token_address\u0026gt; mint --disable If you want to get your price to show in the wallets, you need to get listed on CoinGecko. There\u0026rsquo;s a bunch of requirements; to apply, here is a link.\nTo get listed on Jupiter: they will automatically list your token once it hits some benchmarks, which can be found here\nNow you just need to start your social media campaigns and best of luck! You can send your boy some of your tokens as thanks @ 3RYPrKxC6BNv3XUMf8Cyjg36pw6Qu1txRvqq6LNq9Psj\nTotal in Fees: 1 SOL (plus your liquidity)\nHope this guide has helped you save some $ when creating your Solana tokens!\n","permalink":"https://mafyuh.com/posts/spl-token-cli/","summary":"I wanted to create an SPL token and after looking online I couldn\u0026rsquo;t find an updated guide. I mainly just found Keyglowmax (SCAM). So I thought I would learn and share. There are much easier ways to create these tokens but they cost $ and spending more $ than needed is no fun. They also have you connect your wallet, which is enough of a worry. This guide costs as little SOL as possible as everything is transacted directly on-chain.","title":"How to create a Solana Token (SPL) from CLI with metadata"},{"content":"This guide is for someone who is looking to setup an Arr Stack for media organization and downloading. This guide requires no remote path mappings, follows Trash-Guides recommendations and every command needed is copy-pasteable. The VM\u0026rsquo;s in this guide are hosted on Proxmox 8.1.4, but you can use any Ubuntu environment (WSL-2, VirtualBox, etc.)\nArr VM Specs:\n2 core host 8GB RAM 100GB Storage Downloader VM Specs:\n2 core host 4GB RAM 250GB Storage (can download up to this limit at a time, be careful when mass downloading or give plenty of space) Prerequisites Ubuntu 22.04 Any Usenet Server Subscription (preferred) Any Usenet Indexer Subscription (preferred) Real-Debrid Subscription (if you like torrents being fast) VPN Subscription (Bare minimum needed to download torrents) Folder Structure Setup Run this command to make all folders, following TRASH-guides recommended naming scheme:\nsudo mkdir -p /data/torrents/{books,movies,music,tv} /data/usenet/{incomplete,complete/{books,movies,music,tv}} /data/media/{books,movies,music,tv} Mounting NAS I use my NAS for storing all my content; this allows me to have 1 spot to have everything saved to, and I don\u0026rsquo;t get tripped up with different file systems. You do not need a NAS, and can just skip this part of the guide and use the local filesystem. I use TrueNAS Scale with SMB. 
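If you are not sure what the share is called, smbclient can list everything the NAS exports before you touch fstab (assuming you install it first with sudo apt install smbclient -y):\nsmbclient -L //\u0026lt;NAS IP\u0026gt; -U \u0026lt;user\u0026gt; This is optional; it is just a quick way to confirm the share name you are about to put in fstab. 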
In order to mount SMB shares to a Linux filesystem, we need to install CIFS utilities:\nsudo apt install cifs-utils -y Then we need to tell the system which directory to map it to. To do this:\nsudo nano /etc/fstab At the end of the file, add an entry for your NAS as such:\n//\u0026lt;NAS IP\u0026gt;/\u0026lt;NAS Share\u0026gt; /data/media cifs username=\u0026lt;user\u0026gt;,password=\u0026lt;pass\u0026gt;,uid=1000,gid=1000,auto,nofail 0 0 Be sure to replace your credentials.\nTo mount your NAS, you can run:\nsudo mount -a Then run the following to make sure your NAS is mounted:\nls /data/media Everything in your NAS should be showing now, but we need to set permissions. To do that, run:\nsudo chown -R $USER:$USER /data sudo chmod -R a=,a+rX,u+w,g+w /data Install Docker Now we have to install Docker. I use this command to install Docker and Docker Engine:\ncurl -fsSL https://get.docker.com | sudo sh Now that Docker is installed, we can add our user to the docker group so we don\u0026rsquo;t have to use sudo with every command:\nsudo usermod -aG docker $USER Now I would make a docker directory to store all your appdata. You can use your home directory if you want, but Trash-Guides recommends not doing so:\nsudo mkdir -p /docker/appdata/{radarr,sonarr,bazarr,prowlarr,lidarr,sabnzbd,qbitty,rdt} Then set permissions on the docker directory:\nsudo chown -R $USER:$USER /docker sudo chmod -R a=,a+rX,u+w,g+w /docker 2 VM Setup I have my downloaders (Sab, Qbitty, Rdt-client) on a different VM than my ARR\u0026rsquo;s. This is because, when I had everything on 1 docker host, I would get constant HTTP errors, mainly from Sab, and as Sab is where I get most of my media, I decided to move it to another VM and then SMB share the download directories over to my ARR\u0026rsquo;s VM.\nYou do not have to do this; you can just have 1 docker host, up to you. It is a lot less work to do it all in 1 VM.\nIf you do this, you need to replicate the original setup, making all the same directories, then run:\nsudo apt update sudo apt install samba We need to configure Samba to tell it what we are sharing:\nsudo nano /etc/samba/smb.conf Add the following at the end of this file:\n[usenet] path = /data/usenet read only = no guest ok = no create mask = 0755 [torrents] path = /data/torrents read only = no guest ok = no create mask = 0755 To create your username and password, replace your_username with your actual username:\nsudo smbpasswd -a your_username Then restart Samba with:\nsudo systemctl restart smbd Go back to your Arr VM and add the following to your /etc/fstab:\n//\u0026lt;nas-ip\u0026gt;/usenet /data/usenet cifs username=\u0026lt;username\u0026gt;,password=\u0026lt;password\u0026gt;,uid=1000,gid=1000,auto,nofail 0 0 //\u0026lt;nas-ip\u0026gt;/torrents /data/torrents cifs username=\u0026lt;username\u0026gt;,password=\u0026lt;password\u0026gt;,uid=1000,gid=1000,auto,nofail 0 0 Mount them with:\nsudo mount -a Then re-run our permissions command:\nsudo chown -R $USER:$USER /data sudo chmod -R a=,a+rX,u+w,g+w /data I would reboot this VM at this point; this will make sure it auto-connects to our SMB shares at boot.\nDocker Compose Files Now that everything is set up, we can actually install the services:\nOne VM This is a full docker compose file for pretty much all major Arr\u0026rsquo;s and downloaders I use. 
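One assumption baked into all the compose files below is PUID=1000 and PGID=1000. Those should match the user that owns /data, which you can confirm with:\nid $USER If your uid/gid come back as something else, swap the numbers in the compose files (and the fstab entries above) to match. 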
I threw Lidarr in here as well, as I run Lidarr for music, but if you dont care about music you can remove lidarr:\nversion: \u0026#34;3.9\u0026#34; services: sabnzbd: image: lscr.io/linuxserver/sabnzbd:latest container_name: sabnzbd environment: - PUID=1000 - PGID=1000 - TZ=Etc/UTC volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/sabnzbd:/config - /data/usenet:/data/usenet:rw ports: - 8080:8080 restart: unless-stopped arch-qbittorrentvpn: image: binhex/arch-qbittorrentvpn:latest container_name: qbittorrentvpn volumes: - \u0026#39;/docker/appdata/qbitty:/config\u0026#39; - \u0026#39;/data/torrents/:/data/torrents\u0026#39; - \u0026#39;/etc/localtime:/etc/localtime:ro\u0026#39; ports: - \u0026#39;49550:49550\u0026#39; - \u0026#39;49551:8118\u0026#39; environment: - VPN_ENABLED=yes - VPN_PROV=protonvpn - VPN_CLIENT=wireguard - VPN_USER=username+pmp - VPN_PASS= - STRICT_PORT_FORWARD=yes - LAN_NETWORK=10.0.0.0/24 - ENABLE_PRIVOXY=yes - PUID=1000 - PGID=1000 - WEBUI_PORT=49550 - UMASK=1000 - DEBUG=false cap_add: - NET_ADMIN sysctls: - net.ipv4.conf.all.src_valid_mark=1 privileged: true network_mode: bridge restart: unless-stopped rdtclient: container_name: rdtclient volumes: - \u0026#39;/data/torrents:/data/torrents\u0026#39; - \u0026#39;/docker/appdata/rdt:/data/db\u0026#39; image: rogerfar/rdtclient restart: always logging: driver: json-file options: max-size: 10m ports: - \u0026#39;6500:6500\u0026#39; bazarr: image: lscr.io/linuxserver/bazarr:latest ports: - \u0026#34;6767:6767\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/bazarr:/config - /data/media:/data/media restart: unless-stopped environment: - PUID=1000 - PGID=1000 lidarr: image: lscr.io/linuxserver/lidarr:latest ports: - \u0026#34;8686:8686\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/lidarr:/config - /data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 prowlarr: image: lscr.io/linuxserver/prowlarr:latest ports: - \u0026#34;9696:9696\u0026#34; volumes: - /docker/appdata/prowlarr:/config restart: unless-stopped environment: - PUID=1000 - PGID=1000 radarr: image: lscr.io/linuxserver/radarr:latest ports: - \u0026#34;7878:7878\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/radarr:/config - /data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 sonarr: image: lscr.io/linuxserver/sonarr:latest ports: - \u0026#34;8989:8989\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/sonarr:/config - /data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 networks: default: name: arrs_default 2 VM Arrs:\nversion: \u0026#34;3.7\u0026#34; services: bazarr: image: lscr.io/linuxserver/bazarr:latest ports: - \u0026#34;6767:6767\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/bazarr:/config - /data/media:/data/media restart: unless-stopped environment: - PUID=1000 - PGID=1000 lidarr: image: lscr.io/linuxserver/lidarr:latest ports: - \u0026#34;8686:8686\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/lidarr:/config - /data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 prowlarr: image: lscr.io/linuxserver/prowlarr:latest ports: - \u0026#34;9696:9696\u0026#34; volumes: - /docker/appdata/prowlarr:/config restart: unless-stopped environment: - PUID=1000 - PGID=1000 radarr: image: lscr.io/linuxserver/radarr:latest ports: - \u0026#34;7878:7878\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/radarr:/config - 
/data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 sonarr: image: lscr.io/linuxserver/sonarr:latest ports: - \u0026#34;8989:8989\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/sonarr:/config - /data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 networks: default: name: arrs_default Downloaders: As stated previously, Sab downloads most of my content (95%), you do not need all 3 of these, you can just copy the Sab part and just use Usenet with Sab. But I like to have a variety.\nversion: \u0026#39;3.9\u0026#39; services: sabnzbd: image: lscr.io/linuxserver/sabnzbd:latest container_name: sabnzbd environment: - PUID=1000 - PGID=1000 - TZ=Etc/UTC volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/sabnzbd:/config - /data/usenet:/data/usenet:rw ports: - 8080:8080 restart: unless-stopped arch-qbittorrentvpn: image: binhex/arch-qbittorrentvpn:latest container_name: qbittorrentvpn volumes: - \u0026#39;/docker/appdata/qbitty:/config\u0026#39; - \u0026#39;/data/torrents/:/data/torrents\u0026#39; - \u0026#39;/etc/localtime:/etc/localtime:ro\u0026#39; ports: - \u0026#39;49550:49550\u0026#39; - \u0026#39;49551:8118\u0026#39; environment: - VPN_ENABLED=yes - VPN_PROV=protonvpn - VPN_CLIENT=wireguard - VPN_USER=username+pmp - VPN_PASS= - STRICT_PORT_FORWARD=yes - LAN_NETWORK=10.0.0.0/24 - ENABLE_PRIVOXY=yes - PUID=1000 - PGID=1000 - WEBUI_PORT=49550 - UMASK=1000 - DEBUG=false cap_add: - NET_ADMIN sysctls: - net.ipv4.conf.all.src_valid_mark=1 privileged: true network_mode: bridge restart: unless-stopped rdtclient: container_name: rdtclient volumes: - \u0026#39;/data/torrents:/data/torrents\u0026#39; - \u0026#39;/docker/appdata/rdt:/data/db\u0026#39; image: rogerfar/rdtclient restart: always logging: driver: json-file options: max-size: 10m ports: - \u0026#39;6500:6500\u0026#39; Running Docker Compose Files In order to run these files, it depends on which option you chose, if 1 VM setup, just copy the compose file and create a new docker-compose.yml file with:\nnano docker-compose.yml Paste in the content, CTRL + X to exit nano, Y to save, ENTER to keep filename. Then run:\ndocker compose up -d If you are using 2 VM\u0026rsquo;s, you need to do this 2x. One for each docker-compose file.\nConclusion Congratulations on setting up your media library backend! We now have to go and configure all these services to work together, which I have another full blog post on which you can find here.\n","permalink":"https://mafyuh.com/posts/docker-arr-stack-guide/","summary":"This guide is for someone who is looking to setup an Arr Stack for media organization and downloading. This guide requires no remote path mappings, follows Trash-Guides recommendations and every command needed is copy-pasteable. The VM\u0026rsquo;s in this guide are hosted on Proxmox 8.1.4, but you can use any Ubuntu environment (WSL-2, VirtualBox, etc.)\nArr VM Specs:\n2 core host 8GB RAM 100GB Storage Downloader VM Specs:\n2 core host 4GB RAM 250GB Storage (can download up to this limit at a time, be careful when mass downloading or give plenty of space) Prerequisites Ubuntu 22.","title":"Docker Compose Arr Stack Guide"},{"content":"Hello! 👋 I\u0026rsquo;m Matt Reeves, a DevOps and GitOps enthusiast with a passion for self-hosting.\nBefore diving into the world of DevOps and GitOps, I honed my skills as an advanced electronics repair technician, tackling complex challenges with multimeters, oscilloscopes, and soldering irons. 
From troubleshooting intricate circuits to mastering surface-mount technology (SMT), I thrived on solving problems and learning what\u0026rsquo;s possible in electronics.\nWhile I continue to stay up-to-date with hardware, my focus has shifted more towards the software side of things. Just as I mastered the intricacies of hardware, I\u0026rsquo;m now determined to delve into the world of software and emerge as a master of DevOps, GitOps, and system administration. With the same dedication and hunger for knowledge that drove me in the realm of electronics, I\u0026rsquo;m excited to tackle the challenges of software development and infrastructure management head-on.\nWhat You\u0026rsquo;ll Find Here DevOps \u0026amp; GitOps: From CI/CD pipelines to Git-driven infrastructure. Self-Hosting: Managing my own homelab and orchestrating various services. Cybersecurity: How I keep my infrastructure safe and secure. Kubernetes \u0026amp; Docker: Pretty much everything I run is containerized. AI: I\u0026rsquo;m also passionate about artificial intelligence (AI), exploring self-hosted text-generation models like Phi, Llama-2, and Gemma, along with running image-generation Stable-Diffusion models. I show you how I integrate AI into various software projects and explore its potential impact. Other Hobbies \u0026amp; Interests MMA Junkie. I haven\u0026rsquo;t missed a major UFC event since I started watching in 2018. Milwaukee Bucks Fan. Born and raised in SE Wisconsin, been a Bucks fan my whole life. MCU Fanboy. I am a huge Marvel fan, especially Spider-Man. Gamer. I spend a lot of time playing video games; for the last few years my main game has been Rocket League, as well as COD. But I play all styles of games. Pets. I have a dog named Knox who\u0026rsquo;s a husky-lab mix. I spend mostly all day every day giving him pets. Why mafyuh? When I was 9 I needed a unique username for Google. My full name is Matthew; if you say mafyuh fast it sort-of sounds the same. Anyways, it stuck. The Google account didn\u0026rsquo;t though :(\nPrivacy Policy Analytics I use Plausible for analytics, focusing on:\nPopular posts Optimal posting times User engagement Plausible collects minimal data:\nPage URL HTTP Referer Browser Operating system Device type Visitor Country Note Plausible uses JavaScript for tracking, allowing you to block it using browser extensions. Their code is open-source on GitHub.\n","permalink":"https://mafyuh.com/about/","summary":"Hello! 👋 I\u0026rsquo;m Matt Reeves, a DevOps and GitOps enthusiast with a passion for self-hosting.\nBefore diving into the world of DevOps and GitOps, I honed my skills as an advanced electronics repair technician, tackling complex challenges with multimeters, oscilloscopes, and soldering irons. From troubleshooting intricate circuits to mastering surface-mount technology (SMT), I thrived on solving problems and learning what\u0026rsquo;s possible in electronics.\nWhile I continue to stay up-to-date with hardware, my focus has shifted more towards the software side of things.","title":"About"},{"content":"Got questions, feedback, or just want to say hi? 
Feel free to reach out to me using the contact information below:\nEmail: admin[at]mafyuh[dot]com Discord Resume If you\u0026rsquo;re interested in my professional experience, you can download my resume below:\nDownload Resume ","title":"Contact"},{"content":"Something I only got into recently is hosting video game servers for games that support servers. Maybe it\u0026rsquo;s just something about having another server, cause these are totally not needed. But they are pretty easy to set up thanks to the open-source community.\nSons of the Forest I wanted to play Sons one day, and when I looked into multiplayer I saw there were options for servers. This sent me Googling, and I found this repo.\nSetting this up took a bit, as the README was not very great. But I got it all figured out after reading GH Issues for who knows how long. Good old Linux permissions.\nHere is a link to the repo I used: https://github.com/jammsen/docker-sons-of-the-forest-dedicated-server\nVM Details\nProxmox VM Ubuntu 22.04 Cloud image 4 core host 16GB RAM 100GB Storage First I created a sons folder in my home directory and cd\u0026rsquo;d into it. To make the game\u0026rsquo;s directories, I run:\nmkdir game steamcmd winedata My docker-compose is the same as on GH, but it is as follows:\nversion: \u0026#39;3.9\u0026#39; services: sons-of-the-forest-dedicated-server: container_name: sons-of-the-forest-dedicated-server image: jammsen/sons-of-the-forest-dedicated-server:latest restart: always environment: ALWAYS_UPDATE_ON_START: 1 ports: - 8766:8766/udp - 27016:27016/udp - 9700:9700/udp volumes: - ./steamcmd:/steamcmd - ./game:/sonsoftheforest - ./winedata:/winedata This is in the sons folder.\nWhenever I go and play I enable the port forward rules in my pfSense. Then once I or a friend get off, I disable the forwards. The logs from the container do state when it is in sleep mode, so I am thinking of an automation that will update my pfSense port forwards when it goes into sleep mode. Maybe one day, but for now I manually enable/disable. I do this as I don\u0026rsquo;t want any port forwards on my network; if it\u0026rsquo;s just temporary like these, it\u0026rsquo;s fine, but never leave a port forward open to game services inside your home network.\nPalworld When Palworld first came out I really wanted to mod actual Pokemon into the game, as I feel most of the Pals in the game look like AI-generated garbage. But I\u0026rsquo;m no video game mod-dev and I don\u0026rsquo;t see anything on the internet (who else loves Nintendo?), so I haven\u0026rsquo;t had this container spun up in a while. I haven\u0026rsquo;t even played since launch, but I paid for the game and set up a server just cause.\nWhen I googled \u0026ldquo;Palworld server github\u0026rdquo;, I laughed cause the first result was the same dev as the Sons server I run. I thought it was gonna be hard, but they made this one simple; just follow the README.\nhttps://github.com/jammsen/docker-palworld-dedicated-server\nI run this container on the same VM as Sons, limiting IP reservations as well as vulnerable systems.\nSame thing goes for folder structure here; I just made a pal folder in my home directory. I do the same thing with port forwards as I do for Sons.\nThanks to the developers of these repos for your work.\n","permalink":"https://mafyuh.com/posts/selfhosted-game-servers/","summary":"Something I only got into recently is hosting video game servers for games that support servers. Maybe it\u0026rsquo;s just something about having another server, cause these are totally not needed. 
But they are pretty easy to set up thanks to the open-source community.\nSons of the Forest I wanted to play Sons one day, and when I looked into multiplayer I saw there were options for servers. This sent me Googling, and I found this repo.","title":"Selfhosted Game Servers"},{"content":"1st step: Increase/resize disk from GUI console 2nd step: Extend physical drive partition and check free space with: sudo growpart /dev/sda 3 sudo pvdisplay sudo pvresize /dev/sda3 sudo pvdisplay 3rd step: Extend Logical volume sudo lvdisplay sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv sudo lvdisplay 4th step: Resize Filesystem sudo resize2fs /dev/ubuntu-vg/ubuntu-lv sudo fdisk -l ","permalink":"https://mafyuh.com/posts/resize-ubuntu-vm-disk/","summary":"1st step: Increase/resize disk from GUI console 2nd step: Extend physical drive partition and check free space with: sudo growpart /dev/sda 3 sudo pvdisplay sudo pvresize /dev/sda3 sudo pvdisplay 3rd step: Extend Logical volume sudo lvdisplay sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv sudo lvdisplay 4th step: Resize Filesystem sudo resize2fs /dev/ubuntu-vg/ubuntu-lv sudo fdisk -l ","title":"Resize Ubuntu VM Disk in Proxmox"},{"content":"This is just a quick guide on how to authenticate your authentik users with Proton using SimpleLogin OIDC.\nTo accomplish this, first create a SimpleLogin account by logging in with Proton. Once that\u0026rsquo;s done, go to https://app.simplelogin.io/developer and create a website. Give it your authentik URL.\nThen go to Oauth Settings and copy your client ID and secret for the next step. Add your authentik URL as the redirect URL, like this: https://auth.example.com/source/oauth/callback/simplelogin/ (simplelogin being the slug in authentik)\nIn authentik, go to Directory - Federation and Social login - Create and create an OpenID OAuth source:\nName: SimpleLogin Slug: simplelogin User matching mode: I chose link with identical email Consumer key: Paste your key Consumer secret: Paste your secret authorization url: https://app.simplelogin.io/oauth2/authorize access token url: https://app.simplelogin.io/oauth2/token profile url: https://app.simplelogin.io/oauth2/userinfo OIDC Well-known URL: https://app.simplelogin.io/.well-known/openid-configuration\nFor the logo, it appears authentik inverts your image; I don\u0026rsquo;t know if it\u0026rsquo;s dark mode or a bug, but regardless here\u0026rsquo;s the regular and inverted image I used. Just right-click and save the image:\nNow go to Flows and Stages - Flows - choose your default authentication stage - click it, then click Stage Bindings - click Edit Stage to the right of your identification stage - expand Source settings and make sure you CTRL + click your newly created SimpleLogin source.\nYou should be able to log out and try to log in with your Proton account!\n","permalink":"https://mafyuh.com/posts/proton-mail-authentik-social-login-setup/","summary":"This is just a quick guide on how to authenticate your authentik users with Proton using SimpleLogin OIDC.\nTo accomplish this, first create a SimpleLogin account by logging in with Proton. Once that\u0026rsquo;s done, go to https://app.simplelogin.io/developer and create a website. Give it your authentik URL.\nThen go to Oauth Settings and copy your client ID and secret for the next step. Add your authentik URL as the redirect URL, like this: https://auth.","title":"Proton Mail - SimpleLogin authentik Social Login Setup"},{"content":"I wanted a way to automate when users tell me a video on my Jellyfin server has an issue. 
After a lot of trial and error, ChatGPT, Bard and I came up with this automation.\nRequirements My only requirements when making this were that it be free and self-hostable. Not even any NPM extensions are required in AP. Actual software requirements are:\nSonarr Radarr Overseerr/Jellyseerr Optional\nSMTP server or ability to send SMTP messages (can also use Discord) ActivePieces or any other automation platform that supports TS (Zapier, n8n, etc.) Here\u0026rsquo;s a great AP setup and how-to video:\nNote: I didn\u0026rsquo;t do any of the ngrok stuff. I just have Nginx Proxy Manager set up with a wildcard certificate. Then just give it a domain name and point it at its IP:8080. No special Nginx config needed. Make sure you set AP_FRONTEND_URL in .env\nThis blog post is rather long; if you prefer to see the code on git, you can find all this code here.\nHow it Works Whenever a user Reports an Issue in Jellyseerr, a webhook is sent to ActivePieces (AP) with the issue data; this triggers the automation to mark the download as failed, delete the file, re-search, refresh the Jellyfin libraries and resolve the original issue with a comment. There is an optional feature to approve or deny the automation.\nWorks across Radarr and Sonarr, as the issue reported can be either a Movie or a TV show.\nThe only caveat is if the issue is an entire Season; we just mark the issue as resolved and leave a comment saying to submit an issue for each episode individually.\nWorks on my Jellyfin, Jellyseerr, Radarr and Sonarr setup. I don\u0026rsquo;t use Plex, but all you would have to change is the Jellyfin Refresh Library request to match Plex\u0026rsquo;s equivalent.\nHere is a pic of the full automation.\nEverything of value is logged to the console, so check there for errors. Let\u0026rsquo;s start breaking it down.\n#1 Jellyseerr Issue Reported First, create a flow in AP, select a trigger, and search for webhook. This will give you the webhook URL for Jellyseerr. Next, in Jellyseerr, under Settings - Users - Default Permissions, make sure Report Issues is checked and save changes. Then, under Settings - Notifications - Webhook, create a webhook notification with the URL from AP, enabling just Issue Reported and Issue Reopened (if you want to sanity-check the endpoint first, see the curl test below). 
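For that curl test, post any small JSON body at the webhook URL AP generated for you (the URL below is a placeholder) and confirm a new run shows up in AP:\ncurl -X POST -H \u0026#34;Content-Type: application/json\u0026#34; -d \u0026#39;{\u0026#34;test\u0026#34;: true}\u0026#39; https://ap.example.com/your-webhook-url 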
This should look as follows (dont worry about my payload showing mediaId, this has since been deleted)\nHere is my full JSON payload just in case:\n{ \u0026#34;notification_type\u0026#34;: \u0026#34;{{notification_type}}\u0026#34;, \u0026#34;event\u0026#34;: \u0026#34;{{event}}\u0026#34;, \u0026#34;subject\u0026#34;: \u0026#34;{{subject}}\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;{{message}}\u0026#34;, \u0026#34;image\u0026#34;: \u0026#34;{{image}}\u0026#34;, \u0026#34;{{media}}\u0026#34;: { \u0026#34;media_type\u0026#34;: \u0026#34;{{media_type}}\u0026#34;, \u0026#34;tmdbId\u0026#34;: \u0026#34;{{media_tmdbid}}\u0026#34;, \u0026#34;tvdbId\u0026#34;: \u0026#34;{{media_tvdbid}}\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;{{media_status}}\u0026#34;, \u0026#34;status4k\u0026#34;: \u0026#34;{{media_status4k}}\u0026#34; }, \u0026#34;{{request}}\u0026#34;: { \u0026#34;request_id\u0026#34;: \u0026#34;{{request_id}}\u0026#34;, \u0026#34;requestedBy_email\u0026#34;: \u0026#34;{{requestedBy_email}}\u0026#34;, \u0026#34;requestedBy_username\u0026#34;: \u0026#34;{{requestedBy_username}}\u0026#34;, \u0026#34;requestedBy_avatar\u0026#34;: \u0026#34;{{requestedBy_avatar}}\u0026#34;, \u0026#34;requestedBy_settings_discordId\u0026#34;: \u0026#34;{{requestedBy_settings_discordId}}\u0026#34;, \u0026#34;requestedBy_settings_telegramChatId\u0026#34;: \u0026#34;{{requestedBy_settings_telegramChatId}}\u0026#34; }, \u0026#34;{{issue}}\u0026#34;: { \u0026#34;issue_id\u0026#34;: \u0026#34;{{issue_id}}\u0026#34;, \u0026#34;issue_type\u0026#34;: \u0026#34;{{issue_type}}\u0026#34;, \u0026#34;issue_status\u0026#34;: \u0026#34;{{issue_status}}\u0026#34;, \u0026#34;reportedBy_email\u0026#34;: \u0026#34;{{reportedBy_email}}\u0026#34;, \u0026#34;reportedBy_username\u0026#34;: \u0026#34;{{reportedBy_username}}\u0026#34;, \u0026#34;reportedBy_avatar\u0026#34;: \u0026#34;{{reportedBy_avatar}}\u0026#34;, \u0026#34;reportedBy_settings_discordId\u0026#34;: \u0026#34;{{reportedBy_settings_discordId}}\u0026#34;, \u0026#34;reportedBy_settings_telegramChatId\u0026#34;: \u0026#34;{{reportedBy_settings_telegramChatId}}\u0026#34; }, \u0026#34;{{comment}}\u0026#34;: { \u0026#34;comment_message\u0026#34;: \u0026#34;{{comment_message}}\u0026#34;, \u0026#34;commentedBy_email\u0026#34;: \u0026#34;{{commentedBy_email}}\u0026#34;, \u0026#34;commentedBy_username\u0026#34;: \u0026#34;{{commentedBy_username}}\u0026#34;, \u0026#34;commentedBy_avatar\u0026#34;: \u0026#34;{{commentedBy_avatar}}\u0026#34;, \u0026#34;commentedBy_settings_discordId\u0026#34;: \u0026#34;{{commentedBy_settings_discordId}}\u0026#34;, \u0026#34;commentedBy_settings_telegramChatId\u0026#34;: \u0026#34;{{commentedBy_settings_telegramChatId}}\u0026#34; }, \u0026#34;{{extra}}\u0026#34;: [] } You should be able to Report an issue on a random movie in Jellyseerr and then go to the webhook trigger and choose Generate sample data, and you should be able to see the data from the request. I recommend doing this and creating an issue for an example movie, TV series( All Seasons) and a TV Series (1 Season)\n(Optional) #2 Create Approval Links In AP add the next step and search Approval, then create approval links.\n(Optional) #3 Send Email This is so I can either approve or deny the file from being deleted. Maybe it\u0026rsquo;s a client issue and I know for a fact my file is good and I dont want deleted. 
Thus the links are sent to me along with the some data from the request, so I know what I am approving/denying.\nYou can use the core SMTP feature but its limited to text. I wanted some more customizability so I chose Resend (supports html) and set up an acct there with one of my extra domains.\nYou don\u0026rsquo;t have to use email, you can use Discord, SMS, any generic http request or whatever you want. I just use email since I pay for my domains and pay Proton Mail for emails so might as well use em.\nNot gonna get too into this, I dont care too much about it atm, customize your email to your liking, but I\u0026rsquo;ll post my somewhat working HTML body here. I literally just copied what Bard gave me, added in data from response and tested and said looks good enough, glitches on my mobile too.\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html lang=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;UTF-8\u0026#34;\u0026gt; \u0026lt;meta name=\u0026#34;viewport\u0026#34; content=\u0026#34;width=device-width, initial-scale=1.0\u0026#34;\u0026gt; \u0026lt;title\u0026gt;Jellyseerr Issue Reported\u0026lt;/title\u0026gt; \u0026lt;style\u0026gt; body { font-family: sans-serif; margin: 0; padding: 0; background-color: #222; color: #fff; } .container { width: 80%; margin: 0 auto; padding: 20px; background-color: #333; border-radius: 10px; box-shadow: 0px 2px 5px rgba(0, 0, 0, 0.1); } .header { display: flex; justify-content: space-between; align-items: center; padding-bottom: 20px; border-bottom: 1px solid #555; } .header h1 { font-size: 24px; font-weight: bold; margin: 0; color: #fff; } .header img { width: 50px; height: 50px; border-radius: 50%; object-fit: cover; } .content { margin: 0 auto; text-align: center; } .issue-subject { font-size: 18px; font-weight: bold; margin-bottom: 10px; color: #fff; } .issue-message { font-size: 16px; line-height: 1.5; margin-bottom: 20px; color: #ccc; } .issue-image { width: 100%; height: auto; margin-bottom: 20px; } .buttons { display: flex; justify-content: space-between; } .button { background-color: #007bff; color: #fff; padding: 10px 20px; border-radius: 5px; cursor: pointer; text-decoration: none; } .button:hover { background-color: #0056b3; } .disapprove-button { background-color: #dc3545; color: #fff; padding: 10px 20px; border-radius: 5px; cursor: pointer; text-decoration: none; } .disapprove-button:hover { background-color: #bd2830; } \u0026lt;/style\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;div class=\u0026#34;container\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;header\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;https://your-logo-url\u0026#34; alt=\u0026#34;Jellyseerr Logo\u0026#34;\u0026gt; \u0026lt;h1\u0026gt;Jellyseerr Issue Reported\u0026lt;/h1\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;content\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;issue-subject\u0026#34;\u0026gt; Jellyseerr Issue Reported \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;issue-message\u0026#34;\u0026gt; This issue was submitted by 1. Jellyseerr Issue Reported body issue reportedBy_username. \u0026lt;br\u0026gt; The reason for the issue:1. Jellyseerr Issue Reported body message \u0026lt;br\u0026gt; Please review the issue and take appropriate action. \u0026lt;br\u0026gt; \u0026lt;img src=\u0026#34; 1. Jellyseerr Issue Reported body image \u0026#34;\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;buttons\u0026#34;\u0026gt; \u0026lt;a href=\u0026#34;2. 
Create Approval Links approvalLink \u0026#34;\u0026gt;\u0026lt;button class=\u0026#34;button\u0026#34;\u0026gt;Approve\u0026lt;/button\u0026gt;\u0026lt;/a\u0026gt; \u0026lt;a href=\u0026#34;2. Create Approval Links disapprovalLink \u0026#34;\u0026gt;\u0026lt;button class=\u0026#34;disapprove-button\u0026#34;\u0026gt;Deny\u0026lt;/button\u0026gt;\u0026lt;/a\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; And here\u0026rsquo;s what an email looks like:\n(Optional) #4 Wait for Approval Pauses flow until I approve or deny.\n#5 Radarr/Sonarr Branch As stated previously, I wanted this to work regardless if Movie or TV show. So using the core Branch feature we just say that if the media_type value from the issue contains the text movie, its true.\n#6 Radarr Mark As Failed All I do here is the Code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code (CASE SENSITIVE)\nHere is the code. Just replace api key and base url:\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const movieNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = movieNameRegex.exec(issueSubject); if (match) { const movieName = match[1]; const year = match[2]; const tmdbId = inputs.issue.media.tmdbId; console.log(`Movie name: ${movieName}`); console.log(`Year: ${year}`); console.log(`TMDB ID: ${tmdbId}`); // Define your Radarr API key and base URL const radarrApiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your Radarr API key const radarrBaseUrl = \u0026#39;https://radarr.example.com/api/v3/\u0026#39;; // Define a function to make API requests to Radarr const makeRadarrRequest = async (endpoint, method = \u0026#39;GET\u0026#39;) =\u0026gt; { const apiUrl = radarrBaseUrl + endpoint; console.log(`Calling Radarr API: ${apiUrl}`); const response = await fetch(apiUrl, { method, headers: { \u0026#39;X-Api-Key\u0026#39;: radarrApiKey, }, }); if (response.ok) { return await response.json(); } else { console.error(`Radarr API request failed: ${response.statusText}`); return null; } }; // Use Radarr\u0026#39;s API to look up the movie by TMDB ID const radarrApiResponseData = await makeRadarrRequest(`movie?tmdbId=${tmdbId}`); if (radarrApiResponseData \u0026amp;\u0026amp; radarrApiResponseData.length \u0026gt; 0) { const movieId = radarrApiResponseData[0].id; // Get the Radarr ID of the first movie console.log(\u0026#39;Radarr Movie ID:\u0026#39;, movieId); // Use the Radarr movie ID to get the history of the movie const historyApiResponseData = await makeRadarrRequest(`history/movie?movieId=${movieId}`); if (historyApiResponseData \u0026amp;\u0026amp; historyApiResponseData.length \u0026gt; 0) { const historyId = historyApiResponseData[0].id; // Get the history ID console.log(\u0026#39;History ID:\u0026#39;, historyId); // Use the history ID to mark the movie as failed const markFailedResponse = await makeRadarrRequest(`history/failed/${historyId}`, \u0026#39;POST\u0026#39;); if (markFailedResponse) { console.log(\u0026#39;Movie successfully marked as failed.\u0026#39;); } else { console.error(\u0026#39;Failed to mark movie as failed\u0026#39;); } } else { console.error(\u0026#39;No history found for movie ID:\u0026#39;, movieId); } } else { console.error(\u0026#39;No movies found for TMDB ID:\u0026#39;, tmdbId); } } }; #7 Delay 5 seconds Give time to process.\n#8 Delete Movie File I didn\u0026rsquo;t want to delete the actual movie from Radarr, but 
just the file itself, thus alot of trial and error, but a working script. All I do here is the Code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const movieNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = movieNameRegex.exec(issueSubject); if (match) { const movieName = match[1]; const year = match[2]; const tmdbId = inputs.issue.media.tmdbId; console.log(`Movie name: ${movieName}`); console.log(`Year: ${year}`); console.log(`TMDB ID: ${tmdbId}`); // Define your Radarr API key const radarrApiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your Radarr API key const radarrBaseUrl = \u0026#39;https://radarr.example.com/api/v3\u0026#39;; // Use Radarr\u0026#39;s API to look up the movie by TMDB ID and get the Radarr ID const radarrApiUrl = `${radarrBaseUrl}/movie?tmdbId=${tmdbId}`; console.log(\u0026#39;Calling Radarr API to look up the movie...\u0026#39;); const radarrApiResponse = await fetch(radarrApiUrl, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: radarrApiKey, }, }); if (radarrApiResponse.ok) { console.log(\u0026#39;Radarr API lookup successful.\u0026#39;); const radarrApiResponseData = await radarrApiResponse.json(); if (radarrApiResponseData.length \u0026gt; 0) { // If the response is an array, you should loop through the results // and access the Radarr ID for each movie. for (const movie of radarrApiResponseData) { const radarrMovieId = movie.movieFile.id; console.log(\u0026#39;Radarr Movie ID:\u0026#39;, radarrMovieId); // Use the Radarr movie ID to delete the corresponding movie file const deleteMovieFileUrl = `${radarrBaseUrl}/movieFile/${radarrMovieId}`; console.log(`Calling Radarr API to delete movie file: ${deleteMovieFileUrl}`); const deleteMovieFileResponse = await fetch(deleteMovieFileUrl, { method: \u0026#39;DELETE\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: radarrApiKey, }, }); if (deleteMovieFileResponse.ok) { console.log(`Movie file successfully deleted.`); } else { console.error(`Failed to delete movie file: ${deleteMovieFileResponse.statusText}`); } } } else { console.error(\u0026#39;No movies found for TMDB ID:\u0026#39;, tmdbId); } } else { console.error(\u0026#39;Radarr API lookup failed:\u0026#39;, radarrApiResponse.statusText); } } }; #9 Delay 5 seconds #10 Search in Radarr Researches for movie just deleted.\nAll I do here is the Code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const movieNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = movieNameRegex.exec(issueSubject); if (match) { const movieName = match[1]; const year = match[2]; const tmdbId = inputs.issue.media.tmdbId; console.log(`Movie name: ${movieName}`); console.log(`Year: ${year}`); console.log(`TMDB ID: ${tmdbId}`); // Define your Radarr API key const radarrApiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your Radarr API key const radarrBaseUrl = \u0026#39;https://radarr.example.com/api/v3\u0026#39; // Use Radarr\u0026#39;s API to look up the movie by TMDB ID and get the Radarr ID const radarrApiUrl = `${radarrBaseUrl}/movie?tmdbId=${tmdbId}`; console.log(\u0026#39;Calling Radarr API to look up the movie...\u0026#39;); const radarrApiResponse = await fetch(radarrApiUrl, { method: \u0026#39;GET\u0026#39;, headers: { 
\u0026#39;X-Api-Key\u0026#39;: radarrApiKey, }, }); if (radarrApiResponse.ok) { console.log(\u0026#39;Radarr API lookup successful.\u0026#39;); const radarrApiResponseData = await radarrApiResponse.json(); if (radarrApiResponseData.length \u0026gt; 0) { const movieId = radarrApiResponseData[0].id; // Get the Radarr ID of the first movie console.log(\u0026#39;Radarr Movie ID:\u0026#39;, movieId); // Trigger Radarr to search for the movie and download const searchUrl = `${radarrBaseUrl}/command`; console.log(`Calling Radarr API to search for the movie: ${searchUrl}`); const searchRequestBody = { name: \u0026#39;MoviesSearch\u0026#39;, movieIds: [movieId], }; const searchResponse = await fetch(searchUrl, { method: \u0026#39;POST\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: radarrApiKey, \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, }, body: JSON.stringify(searchRequestBody), }); if (searchResponse.ok) { console.log(\u0026#39;Radarr movie search initiated.\u0026#39;); } else { console.error(`Failed to initiate movie search: ${searchResponse.statusText}`); } } else { console.error(\u0026#39;No movies found for TMDB ID:\u0026#39;, tmdbId); } } else { console.error(\u0026#39;Radarr API lookup failed:\u0026#39;, radarrApiResponse.statusText); } } }; #11 Delay 4 minutes This gives your download client time to download and transfer file to mapped directory. I have Gig+ internet and 99% of the time everything is done in 4 minutes.\n#12 Scan JF Libraries Using core HTTP feature, send a http POST request to https://jellyfin.domain.com/Library/Refresh with Headers X-MediaBrowser-Token and value is your Jellyfin API Key\nI only do this as Jellyfin doesn\u0026rsquo;t scan my NAS whenever I add a new file.\n#13 Add Comment/Resolve Issue This just automatically resolves the issue in Jellyseerr and adds a comment letting the user know action was taken.\nAll I do here is the Code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueId = inputs.issue.issue_id; const apiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your actual API key const baseURL = \u0026#39;https://jellyseerr.example.com/api/v1\u0026#39; const commentApiUrl = `${baseURL}/issue/${issueId}/comment`; const statusApiUrl = `${baseURL}/issue/${issueId}/resolved`; const headers = { \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, \u0026#39;X-Api-Key\u0026#39;: apiKey, }; const commentData = { message: \u0026#39;Your issue has been approved and a new version of the content has been automatically downloaded and updated in Jellyfin. Your issue has been set to Resolved. 
If you are still experiencing problems, re-open your issue.\u0026#39;, }; const commentRequestOptions = { method: \u0026#39;POST\u0026#39;, headers: headers, body: JSON.stringify(commentData), }; try { // Post comment const commentResponse = await fetch(commentApiUrl, commentRequestOptions); const commentData = await commentResponse.json(); console.log(commentData); // Update status const statusRequestOptions = { method: \u0026#39;POST\u0026#39;, // or PUT depending on your API headers: headers, // Add any additional data required to update the status }; const statusResponse = await fetch(statusApiUrl, statusRequestOptions); const statusData = await statusResponse.json(); console.log(statusData); return true; } catch (error) { console.error(error); return false; } }; We are now done with the Radarr flow. Moving onto Sonarr.\n#14 Branch Episodes and Seasons With the issue data, we also get an \u0026ldquo;extra\u0026rdquo; field which is where the requests Affected Episode Number and Affected Season Number are. What this branch does is see if there is an affected Episode Number by seeing if that field in the data exists. You will have to create an issue for a TV show and say an entire season is affected. Then use that sample data, go back to this branch and add the value\nJellyseerr Issue Reported body extra 1 as pictured #15 Add Comment/Resolve Issue This path meant the user reported an issue on an entire season and basically sends a response to them telling them to do it individually. I probably could have gotten a script working for this but I spent a few hours on it and frustratingly gave up. Maybe I will update this in the future but for now idrc.\nAgain, all I do here is the code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueId = inputs.issue.issue_id; const apiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your actual API key const baseURL = \u0026#39;https://jellyseerr.example.com/api/v1\u0026#39; const commentApiUrl = `${baseURL}/issue/${issueId}/comment`; const statusApiUrl = `${baseURL}/issue/${issueId}/resolved`; const headers = { \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, \u0026#39;X-Api-Key\u0026#39;: apiKey, }; const commentData = { message: \u0026#39;Please do not report an entire season as the issue. Specify each Episode number. Please delete this issue and resubmit. 
Your issue has been automatically marked as Resolved.\u0026#39;, }; const commentRequestOptions = { method: \u0026#39;POST\u0026#39;, headers: headers, body: JSON.stringify(commentData), }; try { // Post comment const commentResponse = await fetch(commentApiUrl, commentRequestOptions); const commentData = await commentResponse.json(); console.log(commentData); // Update status const statusRequestOptions = { method: \u0026#39;POST\u0026#39;, headers: headers, }; const statusResponse = await fetch(statusApiUrl, statusRequestOptions); const statusData = await statusResponse.json(); console.log(statusData); return true; } catch (error) { console.error(error); return false; } }; #16 Mark as Failed Sonarr Again, all I do here is the code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const tvShowNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = tvShowNameRegex.exec(issueSubject); if (match) { const tvShowName = match[1]; const year = match[2]; const tvdbId = inputs.issue.media.tvdbId; // Using TVDB ID for TV shows console.log(`TV Show name: ${tvShowName}`); console.log(`Year: ${year}`); console.log(`TVDB ID: ${tvdbId}`); // Define your Sonarr API key and base URL const sonarrApiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your Sonarr API key const sonarrBaseUrl = \u0026#39;https://sonarr.example.com/api/v3\u0026#39;; // Use Sonarr\u0026#39;s API to look up the series by TVDB ID and get the Sonarr ID const seriesResponse = await fetch(`${sonarrBaseUrl}/series/lookup?term=tvdb:${tvdbId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (seriesResponse.ok) { const seriesData = await seriesResponse.json(); if (seriesData.length \u0026gt; 0) { const seriesId = seriesData[0].id; // Find the affected season and episode numbers const affectedSeason = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Season\u0026#39;)?.value); const affectedEpisode = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Episode\u0026#39;)?.value); console.log(\u0026#34;Season ID = \u0026#34; + affectedSeason); console.log(\u0026#34;Episode ID = \u0026#34; + affectedEpisode); // Get the history of the series const historyResponse = await fetch(`${sonarrBaseUrl}/history/series?seriesId=${seriesId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (historyResponse.ok) { const historyData = await historyResponse.json(); // Find the most recent entry that matches the affected season and episode const recentEntry = historyData.find(entry =\u0026gt; { const sourceTitleMatch = /S(\\d+)E(\\d+)/.exec(entry.sourceTitle); if (sourceTitleMatch) { const sourceSeason = parseInt(sourceTitleMatch[1]); const sourceEpisode = parseInt(sourceTitleMatch[2]); return sourceSeason === affectedSeason \u0026amp;\u0026amp; sourceEpisode === affectedEpisode; } return false; }); if (recentEntry) { const episodeId = recentEntry.episodeId; const id = recentEntry.id; // This is the ID you need for marking as failed console.log(\u0026#34;Found Episode ID = \u0026#34; + episodeId); console.log(\u0026#34;Found Most Recent Download ID = \u0026#34; + id); // Use the episode ID to mark the episode as failed const markFailedUrl = `${sonarrBaseUrl}/history/failed/${id}`; console.log(`Calling Sonarr API to mark episode as failed: 
${markFailedUrl}`); const markFailedResponse = await fetch(markFailedUrl, { method: \u0026#39;POST\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, body: JSON.stringify({ status: \u0026#39;failed\u0026#39; }), }); if (markFailedResponse.ok) { console.log(\u0026#39;Episode successfully marked as failed in Sonarr.\u0026#39;); } else { console.error(`Failed to mark episode as failed in Sonarr: ${markFailedResponse.statusText}`); } } else { console.error(\u0026#39;No matching entry found in the series history for the affected episode.\u0026#39;); } } else { console.error(\u0026#39;Failed to fetch series history:\u0026#39;, historyResponse.statusText); } } else { console.error(\u0026#39;No series found for the provided TVDB ID:\u0026#39;, tvdbId); } } else { console.error(\u0026#39;Failed to fetch series data:\u0026#39;, seriesResponse.statusText); } } }; You may have to play around a bit and see if when you run this it auto searches for the file. My Sonarr does but my Radarr doesn\u0026rsquo;t, couldnt find any setting. Regardless I include a search command and even if Sonarr searches 2 times it appears 1 will cancel out. This is why no time delay between this code and file deletion.\n#17 Delete File Sonarr Again, all I do here is the code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const tvShowNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = tvShowNameRegex.exec(issueSubject); if (match) { const tvShowName = match[1]; const year = match[2]; const tvdbId = inputs.issue.media.tvdbId; console.log(`TV Show name: ${tvShowName}`); console.log(`Year: ${year}`); console.log(`TVDB ID: ${tvdbId}`); const sonarrApiKey = \u0026#39;your-api-key\u0026#39;; const sonarrBaseUrl = \u0026#39;https://sonarr.example.com/api/v3\u0026#39;; const seriesResponse = await fetch(`${sonarrBaseUrl}/series/lookup?term=tvdb:${tvdbId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (seriesResponse.ok) { const seriesData = await seriesResponse.json(); if (seriesData.length \u0026gt; 0) { const seriesId = seriesData[0].id; const affectedSeason = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Season\u0026#39;)?.value); const affectedEpisode = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Episode\u0026#39;)?.value); const episodeFilesResponse = await fetch(`${sonarrBaseUrl}/episodefile?seriesId=${seriesId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (episodeFilesResponse.ok) { const episodeFilesData = await episodeFilesResponse.json(); const targetEpisode = episodeFilesData.find(episode =\u0026gt; { const parsedPath = episode.relativePath.match(/S(\\d+)E(\\d+)/); if (parsedPath) { const episodeSeason = parseInt(parsedPath[1]); const episodeNumber = parseInt(parsedPath[2]); return episodeSeason === affectedSeason \u0026amp;\u0026amp; episodeNumber === affectedEpisode; } return false; }); if (targetEpisode) { const targetEpisodeId = targetEpisode.id; console.log(\u0026#34;Found Episode ID = \u0026#34; + targetEpisodeId); // Delete the target episode file const deleteEpisodeUrl = `${sonarrBaseUrl}/episodefile/${targetEpisodeId}`; const deleteEpisodeResponse = await fetch(deleteEpisodeUrl, { method: \u0026#39;DELETE\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: 
sonarrApiKey, }, }); if (deleteEpisodeResponse.ok) { console.log(\u0026#39;Episode file successfully deleted in Sonarr.\u0026#39;); } else { console.error(`Failed to delete episode file in Sonarr: ${deleteEpisodeResponse.statusText}`); } } else { console.error(\u0026#39;No matching episode found in the episode files for the affected season and episode.\u0026#39;); } } else { console.error(\u0026#39;Failed to fetch episode files:\u0026#39;, episodeFilesResponse.statusText); } } else { console.error(\u0026#39;No series found for the provided TVDB ID:\u0026#39;, tvdbId); } } else { console.error(\u0026#39;Failed to fetch series data:\u0026#39;, seriesResponse.statusText); } } }; #18 Re-search in Sonarr Again, all I do here is the code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const tvShowNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = tvShowNameRegex.exec(issueSubject); if (match) { const tvShowName = match[1]; const year = match[2]; const tvdbId = inputs.issue.media.tvdbId; console.log(`TV Show name: ${tvShowName}`); console.log(`Year: ${year}`); console.log(`TVDB ID: ${tvdbId}`); const sonarrApiKey = \u0026#39;your-api-key\u0026#39;; const sonarrBaseUrl = \u0026#39;https://sonarr.example.com/api/v3\u0026#39;; const seriesResponse = await fetch(`${sonarrBaseUrl}/series/lookup?term=tvdb:${tvdbId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (seriesResponse.ok) { const seriesData = await seriesResponse.json(); if (seriesData.length \u0026gt; 0) { const seriesId = seriesData[0].id; const affectedSeason = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Season\u0026#39;)?.value); const affectedEpisode = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Episode\u0026#39;)?.value); const historyResponse = await fetch(`${sonarrBaseUrl}/history/series?seriesId=${seriesId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (historyResponse.ok) { const historyData = await historyResponse.json(); const recentEntry = historyData.find(entry =\u0026gt; { const sourceTitleMatch = /S(\\d+)E(\\d+)/.exec(entry.sourceTitle); if (sourceTitleMatch) { const sourceSeason = parseInt(sourceTitleMatch[1]); const sourceEpisode = parseInt(sourceTitleMatch[2]); return sourceSeason === affectedSeason \u0026amp;\u0026amp; sourceEpisode === affectedEpisode; } return false; }); if (recentEntry) { const episodeId = recentEntry.episodeId; console.log(\u0026#34;Found Episode ID = \u0026#34; + episodeId); // Perform the episode search const searchPayload = { name: \u0026#39;EpisodeSearch\u0026#39;, episodeIds: [episodeId], }; const searchResponse = await fetch(`${sonarrBaseUrl}/command`, { method: \u0026#39;POST\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, }, body: JSON.stringify(searchPayload), }); if (searchResponse.ok) { console.log(\u0026#39;Episode search command successfully sent to Sonarr.\u0026#39;); } else { console.error(`Failed to send episode search command to Sonarr: ${searchResponse.statusText}`); } } else { console.error(\u0026#39;No matching entry found in the series history for the affected episode.\u0026#39;); } } else { console.error(\u0026#39;Failed to fetch series 
history:\u0026#39;, historyResponse.statusText); } } else { console.error(\u0026#39;No series found for the provided TVDB ID:\u0026#39;, tvdbId); } } else { console.error(\u0026#39;Failed to fetch series data:\u0026#39;, seriesResponse.statusText); } } }; #19 Delay for 4 Minutes Waiting for media to download and transfer.\n#20 Add Comment/Resolve Issue Again, all I do here is the code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueId = inputs.issue.issue_id; const apiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your actual API key const baseURL = \u0026#39;https://jellyseerr.example.com/api/v1\u0026#39; const commentApiUrl = `${baseURL}/issue/${issueId}/comment`; const statusApiUrl = `${baseURL}/issue/${issueId}/resolved`; const headers = { \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, \u0026#39;X-Api-Key\u0026#39;: apiKey, }; const commentData = { message: \u0026#39;Your issue has been approved and a new version of the content has been automatically downloaded and updated in Jellyfin. Your issue has been set to Resolved. If you are still experiencing problems, re-open your issue.\u0026#39;, }; const commentRequestOptions = { method: \u0026#39;POST\u0026#39;, headers: headers, body: JSON.stringify(commentData), }; try { // Post comment const commentResponse = await fetch(commentApiUrl, commentRequestOptions); const commentData = await commentResponse.json(); console.log(commentData); // Update status const statusRequestOptions = { method: \u0026#39;POST\u0026#39;, headers: headers, }; const statusResponse = await fetch(statusApiUrl, statusRequestOptions); const statusData = await statusResponse.json(); console.log(statusData); return true; } catch (error) { console.error(error); return false; } }; #21 Same as #12 Conclusion Once all this is done you can publish the flow and try it out!\nIf you have any feedback you can DM on Reddit. I\u0026rsquo;d love to see how you have edited this automation to your exact needs.\nNow the hard part, getting your users to actually report the issues in Jellyseerr and not reach out to you!\n","permalink":"https://mafyuh.com/posts/how-to-automate-jellyfin-issue-handling/","summary":"I wanted a way to automate when users tell me a video on my Jellyfin server has an issue. After alot of trial and error, ChatGPT, Bard and I came up with this automation.\nRequirements My only requirements when making this was that it was free and self-hostable. Not even any NPM extensions are required in AP. Actual Software requirements are:\nSonarr Radarr Overseerr/Jellyseerr Optional\nSMTP server or ability to send SMTP messages (can also use discord) ActivePieces or any other automation platform that supports TS.","title":"How To Automate Jellyfin Issue Handling"},{"content":"authentik\u0026rsquo;s docs have a guide already for Guacamole. You can find that here. Follow all the instructions there, (especially the part where you create a user in Guacamole with the USERNAME of your email. not just filling in the email), but if you are using Cloudflare as our DNS you may run into problems. Such as infinite redirect loop.\nError 403 Forbidden While it was looping, I checked my Guacamole docker container logs in Portainer, and found the 403 Forbidden error.\n22:03:59.418 [http-nio-8080-exec-2] INFO o.a.g.a.o.t.TokenValidationService - Rejected invalid OpenID token: JWT processing failed. 
Additional details: [[17] Unable to process JOSE object (cause: org.jose4j.lang.UnresolvableKeyException: Unable to find a suitable verification key for JWS w/ header {\u0026#34;alg\u0026#34;:\u0026#34;RS256\u0026#34;,\u0026#34;kid\u0026#34;:\u0026#34;xxx\u0026#34;,\u0026#34;typ\u0026#34;:\u0026#34;JWT\u0026#34;} due to an unexpected exception (java.io.IOException: Non 200 status code (403 Forbidden) returned from https://example.com/application/o/guacamole/jwks/?exclude_x5) while obtaining or using keys from JWKS endpoint at https://example.com/application/o/guacamole/jwks/?exclude_x5): JsonWebSignature{\u0026#34;alg\u0026#34;:\u0026#34;RS256\u0026#34;,\u0026#34;kid\u0026#34;:\u0026#34;xxx\u0026#34;,\u0026#34;typ\u0026#34;:\u0026#34;JWT\u0026#34;} I assumed it had something to do with my Nginx Proxy Manager and the way I was proxying Guacamole, I do have WebSocket support and block common exploits enabled, their docs give a nginx config that I added under advanced.\nlocation /guacamole/ { proxy_pass http://HOSTNAME:8080; proxy_buffering off; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $http_connection; access_log off; } I messed around with settings individually for hours, reading their docs, tried oznu\u0026rsquo;s Guacamole image as well, this time with errors about the postgres version being incompatible. Figured it could be something with Cloudflare so turned down my HTTPS settings. Nada. Tried SAML, more errors. Finally found this github issue and thanks to Fma965 for finding the solution.\nGo to your Cloudflare Dashboard. Click on your domains summary and then on the left tab find Rules.\nUnder Page Rules - Create a New Page Rule, set the URL as your jwks URL from authentik\u0026rsquo;s provider summary. Under pick a setting, choose Browser Integrity Check and make sure its unchecked. Save.\nThis finally got me authenticated into my Guacamole instance via authentik. I spent way too much time on this integration. Anyways, hope this guide helps someone who may be in my shoes.\n","permalink":"https://mafyuh.com/posts/how-to-authenticate-guacamole-authentik-nginxproxymanager/","summary":"authentik\u0026rsquo;s docs have a guide already for Guacamole. You can find that here. Follow all the instructions there, (especially the part where you create a user in Guacamole with the USERNAME of your email. not just filling in the email), but if you are using Cloudflare as our DNS you may run into problems. Such as infinite redirect loop.\nError 403 Forbidden While it was looping, I checked my Guacamole docker container logs in Portainer, and found the 403 Forbidden error.","title":"How to authenticate Guacamole via authentik with Cloudflare and Nginx Proxy Manager"},{"content":"If you are getting error messages like:\n422: the change you wanted was rejected. 
message from saml: actioncontroller::invalidauthenticitytoken Just make sure you set these in your Nginx Proxy Manager hosts Advanced field:\nlocation / { proxy_pass http://zammad:8080; # Replace proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-Ssl on; proxy_set_header X-Forwarded-Port $server_port; proxy_set_header X-Forwarded-Host $host; } I spent way too long trying to figure this out, reading through Github issues, breaking my SAML provider and Zammad configs, starting over, when the whole time it was just good old nginx header issues.\nHope this helps someone out. Fix was found on this rails github issue.\n(https://github.com/rails/rails/issues/22965)\n","permalink":"https://mafyuh.com/posts/how-to-authenticate-zammad-via-saml-with-nginx-proxy-manager/","summary":"If you are getting error messages like:\n422: the change you wanted was rejected. message from saml: actioncontroller::invalidauthenticitytoken Just make sure you set these in your Nginx Proxy Manager hosts Advanced field:\nlocation / { proxy_pass http://zammad:8080; # Replace proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-Ssl on; proxy_set_header X-Forwarded-Port $server_port; proxy_set_header X-Forwarded-Host $host; } I spent way too long trying to figure this out, reading through Github issues, breaking my SAML provider and Zammad configs, starting over, when the whole time it was just good old nginx header issues.","title":"How to authenticate Zammad via SAML with Nginx Proxy Manager"},{"content":"You could do this with OpenID as well but this method is using SAML. This guide assumes you already have running instances of Kasm Workspaces and authentik.\nThe official authentik docs dont have a Kasm Integration listed at the time. So I thought I would help out anyone who is trying to integrate these services via SAML. authentik\u0026rsquo;s SAML docs can be found here.\nSetting up Kasm In the Kasm Workspaces admin, click Access Management - Authentication - SAML and create a new configuration. Make sure you enable and make default after testing. You will probably find yourself switching between tabs alot, its best to start creating them both at the same time as you need links from each.\nDisplay Name: authentik Logo URL: https://auth.example.com/static/dist/assets/icons/icon.svg (or custom logo) Host Name: authentik NameID Attribute: emailAddress Entity ID: authentik Single Sign On Service/SAML 2.0 Endpoint: https://auth.example.com/application/saml/kasm/sso/binding/redirect/ X509 Certificate: Skip to authentik setup first, then come back here. In authentik admin, go to your newly created SAML provider, when you click the provider and are brought to its details, you should have the option to Download signing certificate. Download it and paste the files contents here. Setting up authentik In the authentik admin, under Applications, create a new SAML provider. Once you created a provider, create an Application and set its provider to the newly created kasm provider. For simplicity sake, the provider and application name is kasm. 
(kasms pictured)\nAuthorization flow: implicit ACS URL: https://kasm.example.com/api/acs/?id=e977b6cf72c7424328275db5f48fd15ab (Single Sign-On Service from kasm photo) Issuer: authentik (must be the same as Entity ID chosen in Kasm) Service Binding Provider: Post Audience: https://kasm.example.com/api/metadata/?id=e977b6cf72c7424328275db5f48fd15ab ( Entity ID URL from Kasm photo) Under Advanced, choose a signing certificate, default is authentik. Go back to Kasm x509 Certificate. Make sure you save you changes. You should now be able to test SAML at the bottom of the page, once tested, I recommend opening a incognito tab and loading your KASM website.\nYou should now be able to authenticate yourself using SAML via authentik. I recommend going back to your admin session and adding your newly created user to the admin group. Also if it should only be you accessing this via authentik, I would change the kasm Application in authentik and bind it to your user.\nThank you to u/agent-squirrel and this subreddit post on helping me with the NameID Attribute part!\n","permalink":"https://mafyuh.com/posts/how-to-authenticate-kasm-via-authentik/","summary":"You could do this with OpenID as well but this method is using SAML. This guide assumes you already have running instances of Kasm Workspaces and authentik.\nThe official authentik docs dont have a Kasm Integration listed at the time. So I thought I would help out anyone who is trying to integrate these services via SAML. authentik\u0026rsquo;s SAML docs can be found here.\nSetting up Kasm In the Kasm Workspaces admin, click Access Management - Authentication - SAML and create a new configuration.","title":"How To Authenticate KASM via authentik"},{"content":"To \u0026lsquo;Show more options\u0026rsquo; by default in File Explorer, open Command Prompt as Administrator, then type or paste the following command:\nreg add HKCU\\Software\\Classes\\CLSID\\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\\InprocServer32 /ve /d \u0026#34;\u0026#34; /f and hit Enter.\n","permalink":"https://mafyuh.com/posts/how-to-show-more-options-by-default-in-windows-11/","summary":"To \u0026lsquo;Show more options\u0026rsquo; by default in File Explorer, open Command Prompt as Administrator, then type or paste the following command:\nreg add HKCU\\Software\\Classes\\CLSID\\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\\InprocServer32 /ve /d \u0026#34;\u0026#34; /f and hit Enter.","title":"How to Show More Options By Default in Windows 11"},{"content":"This is just a visual representations of how my current setup flows.\nI have some of the docker-compose files that make up this infra on my Gitea\n","permalink":"https://mafyuh.com/posts/network-traffic-map/","summary":"This is just a visual representations of how my current setup flows.\nI have some of the docker-compose files that make up this infra on my Gitea","title":"Network Traffic Map"},{"content":"Just a straight forward list of pretty much everything that makes up my homelab. 
Or systems I\u0026rsquo;ve ran in the past.\nOperating Systems\nUbuntu 23.04 Ubuntu 22.04 (primary on most systems) CentOS/Fedora 38 (only when Ubuntu doesnt play nice) Debian 11 Proxmox 8 Windows 10/11 TrueNAS Scale (virtualized) CasaOS (zimaboard) pfSense Applications/Containers\nNginx Proxy Manager Nginx Apache2 Traefik Authentik Portainer Yacht AdGuardHome Pihole Wazuh Zabbix Uptime Kuma Ghost (this blog) Wordpress Hydroxide (proton mail bridge) Calibre Smokeping Openspeedtest Grafana Prometheus InfluxDB PostgresSQL MySQL Watchtower Apache Guacamole Ansible Terraform Packer Vaultwarden Kasm Workspaces Jellyfin Plex Twingate Tailscale Headscale Wireguard LinkStack N8N Gotify Nextcloud Immich AI\nGPT4ALL Stable Diffusion LocalAI Auto-GPT Comfy UI Arr Suite\nRadarr Sonarr Prowlarr Lidarr Jellyseer Tdarr Requesterr Real Debrid Client Wizarr ","permalink":"https://mafyuh.com/posts/software/","summary":"Just a straight forward list of pretty much everything that makes up my homelab. Or systems I\u0026rsquo;ve ran in the past.\nOperating Systems\nUbuntu 23.04 Ubuntu 22.04 (primary on most systems) CentOS/Fedora 38 (only when Ubuntu doesnt play nice) Debian 11 Proxmox 8 Windows 10/11 TrueNAS Scale (virtualized) CasaOS (zimaboard) pfSense Applications/Containers\nNginx Proxy Manager Nginx Apache2 Traefik Authentik Portainer Yacht AdGuardHome Pihole Wazuh Zabbix Uptime Kuma Ghost (this blog) Wordpress Hydroxide (proton mail bridge) Calibre Smokeping Openspeedtest Grafana Prometheus InfluxDB PostgresSQL MySQL Watchtower Apache Guacamole Ansible Terraform Packer Vaultwarden Kasm Workspaces Jellyfin Plex Twingate Tailscale Headscale Wireguard LinkStack N8N Gotify Nextcloud Immich AI","title":"Software"},{"content":"Most of my infrastructure is hosted on my in-lab Proxmox server, along with a few new machines for dedicated services. Here are some of the specs of some of the in-lab machines.\nProxmox Server CPU: Intel Core i7-9700K GPU: Nvidia GeForce GTX 1660 6GB RAM: 64GB DDR4 3000Mhz NVME SSD\u0026rsquo;s for storage 4x 4TB HDD\u0026rsquo;s (passthrough to NAS) Gaming PC CPU: Intel Core i7-13700K GPU: Nvidia GeForce RTX 3080 RAM: 64GB DDR5 6000 Mhz SSD: Samsung 980 Pro 2TB Mobo: MPG Z790 EDGE WIFI Windows 11 Pro Main PC used for everything. I just remote into every other machine. Yes, it is on top of my mini-fridge. Yes, my cable management is terrible.\nNetworking ISP: Xfinity. Coax currently getting 2.0Gbps download and 80mbps upload. (my monitoring in lab averages 2.21Gbps down and 76mbps up) Router: pfSense Box AP\u0026rsquo;s: TP-Link Deco XE75 PRO (x3) WIFI 6E Mesh Switch: TRENDnet 6-port 10G ","permalink":"https://mafyuh.com/posts/hardware/","summary":"Most of my infrastructure is hosted on my in-lab Proxmox server, along with a few new machines for dedicated services. Here are some of the specs of some of the in-lab machines.\nProxmox Server CPU: Intel Core i7-9700K GPU: Nvidia GeForce GTX 1660 6GB RAM: 64GB DDR4 3000Mhz NVME SSD\u0026rsquo;s for storage 4x 4TB HDD\u0026rsquo;s (passthrough to NAS) Gaming PC CPU: Intel Core i7-13700K GPU: Nvidia GeForce RTX 3080 RAM: 64GB DDR5 6000 Mhz SSD: Samsung 980 Pro 2TB Mobo: MPG Z790 EDGE WIFI Windows 11 Pro Main PC used for everything.","title":"Hardware"}] \ No newline at end of file +[{"content":"I wanted to create an SPL token and after looking online I couldn\u0026rsquo;t find an updated guide. I mainly just found Keyglowmax (SCAM). So I thought I would learn and share. 
There are much easier ways to create these tokens but they cost $ and spending more $ than needed is no fun. They also have you connect your wallet which is enough of a worry. This guide costs as little SOL as possible as everything is transacted directly on-chain. Everything is done from the CLI.\nThis guide just covers the basics; the tools used are far more powerful than what I use them for. This is just creating a basic token with no taxes, locked supply, or anything complex, but these tools do support those options. If you are interested in doing more I would read the proper documentation.\nhttps://docs.solanalabs.com/cli/install https://metaboss.rs/overview.html https://spl.solana.com/token NetworkChuck has a video from late 2021 on doing this, but some commands are a bit outdated, and Solana updated their entire metadata process in 2022.\nI am using an Ubuntu 22.04 VM with 60GB storage to run these commands.\nStarting balance: 0.079975 SOL Ending balance: 0.05731652 SOL Total SOL cost: 0.02265848 SOL ($4.22 on 3/15/2024) Installing Solana Tools First we need to download Solana tools to our system:\nsh -c \u0026#34;$(curl -sSfL https://release.solana.com/stable/install)\u0026#34; then run the export path command that is given to you:\nexport PATH=\u0026#34;/home/mafyuh/.local/share/solana/install/active_release/bin:$PATH\u0026#34; Restart your terminal session.\nCreating Wallet We will create a new SOL wallet to fund our token. To do this run:\nsolana-keygen new --derivation-path \u0026#34;m/44\u0026#39;/501\u0026#39;/0\u0026#39;/0\u0026#39;\u0026#34; --force --no-bip39-passphrase Credit to u/nel0_angel0 for finding the --derivation-path flag.\nI would back up your recovery seed phrase and take note of the public address. I would fund this wallet with some SOL as well at this time. It\u0026rsquo;s best to restore this private key in your wallet on PC/phone. (Phantom, Solflare, etc.)\nKeep note of the keypair directory for a later step.\nCheck your SOL balance with:\nsolana balance Install Rust We need Rust in order to create the token, to install Rust run:\ncurl --proto \u0026#39;=https\u0026#39; --tlsv1.2 -sSf https://sh.rustup.rs | sh Press enter for default installation. Once completed, restart your session again.\nThen we need to install some needed packages:\nsudo apt install libudev-dev llvm libclang-dev libssl-dev pkg-config build-essential protobuf-compiler -y Install spl-token-cli Now using Rust we are gonna install Solana\u0026rsquo;s CLI tools; this will take a few minutes.\ncargo install spl-token-cli Create Token Creating a new token is simple; make sure your wallet is funded with SOL and just run:\nspl-token create-token Your token\u0026rsquo;s address will be printed on screen. You will use this address in pretty much all the rest of the steps so keep it handy.\nNote: this creates a 9-decimal token with no extensions; if you want to change this and add complexity to the token, check out this\nIf you want to create a token with a decimal count other than 9, use:\nspl-token create-token --decimals \u0026lt;# of decimals\u0026gt; For a list of all things you can do, run:\nspl-token create-token --help Now we need to create a token account for this token:\nspl-token create-account \u0026lt;TOKEN_ADDRESS\u0026gt; Example:\nspl-token create-account 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr If you get errors like:\n\u0026ldquo;unable to confirm transaction. 
This can happen in situations such as transaction expiration and insufficient fee-payer funds\u0026rdquo;\nYou just need to retry a few times, it will eventually go thru but sometimes takes 3-4 runs.\nMinting Tokens Now that you have a token and an account for the token, you can actually mint some tokens. To do this run:\nspl-token mint \u0026lt;TOKEN_ADDRESS\u0026gt; \u0026lt;# of tokens\u0026gt; \u0026lt;ACCOUNT_ADDRESS\u0026gt; Example:\nspl-token mint 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr 1000000 CkaGbdriXVMHtzFBPtnpDjQvZ9gM9bwd8XdTTKR2Wx32 To see your tokens you can run:\nspl-token accounts Now if you want to send these tokens to a new address, just run:\nspl-token transfer --fund-recipient --allow-unfunded-recipient \u0026lt;TOKEN_ADDRESS\u0026gt; \u0026lt;# of tokens\u0026gt; \u0026lt;NEW_ADDRESS\u0026gt; Example:\nspl-token transfer --fund-recipient --allow-unfunded-recipient 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr 1000000 2DDyEt5N4y77ETWhhUmkZiympQbpjkfrt8FcMKhB1iWU This won\u0026rsquo;t be needed if you restored your private key in your wallet.\nInstalling Metaboss Once this completes you can install metaboss which is needed to upload metadata. You can try to use spl-token built in metadata uploader as well, using \u0026ndash;enable-metadata and initialize-metadata during token creation, but I couldn\u0026rsquo;t get this to work. Metaboss worked 1st try, again, this takes some time:\ncargo install metaboss Arweave/Github While we wait on metaboss to install, we should start uploading our tokens Logo to a cloud provider, I use Arweave in this example but you can use anything really. There are also many ways to upload to arweave so this is just a friendly example thats free.\nFirst create an account at https://akord.com/use-arweave Upload your image to a new vault. (PNG) Click on the information icon next to your image and copy the arweave.net URL. (Not under Share) We need this for our JSON file we will create next.\nNow you can create a json file, and in it paste the following:\n{ \u0026#34;name\u0026#34;: \u0026#34;TOKEN_NAME\u0026#34;, \u0026#34;symbol\u0026#34;: \u0026#34;SYM\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Description of token\u0026#34;, \u0026#34;image\u0026#34;: \u0026#34;https://arweave.net/image-url-from-above\u0026#34; } If you want metadata extensions use:\n{ \u0026#34;name\u0026#34;: \u0026#34;TOKEN_NAME\u0026#34;, \u0026#34;symbol\u0026#34;: \u0026#34;SYM\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Small description of your token.\u0026#34;, \u0026#34;image\u0026#34;: \u0026#34;https://arweave.net/image-url-from-above\u0026#34;, \u0026#34;extensions\u0026#34;: { \u0026#34;website\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;twitter\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;telegram\u0026#34;: \u0026#34;\u0026#34; } } Now save this file with .json extension and upload it to Arweave just like the image. Now we need this JSON file\u0026rsquo;s Arweave link. Copy it from akord and create a new json file in your Solana server\u0026rsquo;s working directory. Fill in the following:\n{ \u0026#34;name\u0026#34;: \u0026#34;TOKEN_NAME\u0026#34;, \u0026#34;symbol\u0026#34;: \u0026#34;SYM\u0026#34;, \u0026#34;uri\u0026#34;: \u0026#34;https://arweave.net/json-file-arweave-url\u0026#34; } Using the JSON file\u0026rsquo;s Arweave link as the URI. Name this file metadata.json.\nIf you are using Github, just make a new repo, upload the json file and image, copy the RAW url. URL should look like https://raw.githubusercontent.com/xxxxxxx. 
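Whichever host you pick, it\u0026rsquo;s worth confirming the link actually serves the raw JSON before you point the on-chain metadata at it. A quick sanity check (the repo URL here is hypothetical, substitute your own):\ncurl -s https://raw.githubusercontent.com/youruser/token-meta/main/metadata.json If that prints your JSON rather than an HTML page, the URI is safe to use. 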
Probably easier to use Github tbh, especially if you don\u0026rsquo;t even know what Arweave is.\nCreating Metadata First we need to update our RPC URL, to set to mainnet run:\nsolana config set --url https://api.mainnet-beta.solana.com --keypair /home/mafyuh/.config/solana/id.json Filling in your keypair directory from earlier.\nNow that metaboss is installed, we just need to run 1 command to create our tokens metadata, again it may take a few tries:\nmetaboss create metadata -a \u0026lt;TOKEN_ADDRESS\u0026gt; -m metadata.json You should be able to go to solscan and see your updated metadata! It should appear in the SOL wallets soon after.\nUpdating Metadata If you ever need to update your metadata, you can do so by running:\nmetaboss update uri --keypair /home/mafyuh/.config/solana/id.json --account \u0026lt;TOKEN_ADDRESS\u0026gt; --new-uri https://arweave.net/new-arweave-json-url or you can just edit your existing json file.\nBONUS Creating a Market Now that you have a coin ready to go, you probably wanna get it listed so others can buy, I\u0026rsquo;ll try to make this process as cheap and easy as possible. Thanks to this Reddit post for finding these values.\nYou need to connect your wallet and have the tokens in the wallet that is connected for this to work, so either restore your private key or send tokens to your wallet on PC.\nNote I would not create this small of a market for a production coin, as what you are paying for when creating a market is essentially space on the blockchain for all your transactions. Long term projects should certainly not pay this little for a market, probably only good for smaller meme coins. If you are planning a long-term project you should probably be paying a few SOL for your market fee.\nRaydium has some good docs on how to create a market and pool, I would review these docs as well.\nFirst go to https://openbook-explorer.xyz/market/create Click Existing under mints Base Mint: Your token address Quote Mint: So11111111111111111111111111111111111111112 (this is swapping for SOL) Under Mints , since by default our token was 9 decimals, we will set these values Min Order size: 0.1 Price Tick: 0.99999998 or 0.99999999 Under advanced options check use advanced options. (this is what we are paying for, if long-term pay the 2.78 SOL) Event Queue Length: 128 Request Queue Length: 63 Orderbook Length: 201 At this time the cost to create this market is 0.32 SOL. Keep note of the market address.\nBONUS Creating Pool Now that we have a market, we need to create a pool. I\u0026rsquo;ve found Raydium to be the cheapest fee, but I would not cheap out on how much SOL you delegate to the pool as this is gonna be your liquidity, and having almost no liquidity is gonna be big red flag. But I have in the past just delegated .1 SOL and it worked, but trust me this is not gonna work out well.\nFirst go to https://raydium.io/liquidity/create/ Connect Wallet Paste Market ID Under Price and initial liquidity What we are doing here is setting our tokens starting price, the amount of tokens you put in the pool at the start decides how much they\u0026rsquo;re worth compared to SOL. All your tokenomics and things like this should probably already be done at this point, unless you\u0026rsquo;re just YOLO\u0026rsquo;ing it like I did. This is by far the most costly part of the process. Set a certain start time if you want. Hit Initialize Liquidity Pool and confirm in your wallet. 
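To make the starting-price point above concrete (illustrative numbers only): if you seed the pool with 1,000,000 of your tokens against 10 SOL, the opening price is 10 / 1,000,000 = 0.00001 SOL per token, and every buy or sell shifts that ratio from there.\n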
The total fee currently is 0.68 SOL to create this pool.\nYou will receive all the LP tokens in your wallet.\nBurning LP / Revoke Authority You will probably want to burn these LP tokens so buyers won\u0026rsquo;t be scared off. There are many ways to do this; from the CLI, run:\nspl-token burn \u0026lt;TOKEN-ACCOUNT-ADDRESS\u0026gt; \u0026lt;AMOUNT\u0026gt; You can get the address on Solscan. Some wallets, like Solflare, allow you to burn tokens through the wallet. Or you can use online services like https://sol-incinerator.com/\nYou will also want to revoke mint authority as well as freeze authority by running:\nspl-token authorize \u0026lt;token_address\u0026gt; freeze --disable And for mint authority:\nspl-token authorize \u0026lt;token_address\u0026gt; mint --disable If you want to get your price to show on the wallets, you need to get listed on CoinGecko. There are a bunch of requirements; here is a link to apply.\nJupiter will automatically list your token once it hits certain benchmarks, which can be found here\nNow you just need to start your social media campaigns and best of luck! You can send your boy some of your tokens as thanks @ 3RYPrKxC6BNv3XUMf8Cyjg36pw6Qu1txRvqq6LNq9Psj\nTotal in Fees: 1 SOL (plus your liquidity)\nHope this guide has helped you save some $ when creating your Solana tokens!\n","permalink":"https://mafyuh.com/posts/spl-token-cli/","summary":"I wanted to create an SPL token and after looking online I couldn\u0026rsquo;t find an updated guide. I mainly just found Keyglowmax (SCAM). So I thought I would learn and share. There are much easier ways to create these tokens but they cost $ and spending more $ than needed is no fun. They also have you connect your wallet which is enough of a worry. This guide costs as little SOL as possible as everything is transacted directly on-chain.","title":"How to create a Solana Token (SPL) from CLI with metadata"},{"content":"This guide is for someone who is looking to set up an Arr Stack for media organization and downloading. This guide requires no remote path mappings, follows Trash-Guides recommendations and every command needed is copy-pasteable. The VMs in this guide are hosted on Proxmox 8.1.4, but you can use any Ubuntu environment (WSL-2, VirtualBox, etc.)\nArr VM Specs:\n2 core host 8GB RAM 100GB Storage Downloader VM Specs:\n2 core host 4GB RAM 250GB Storage (can download up to this limit at a time, be careful when mass downloading or give plenty of space) Prerequisites Ubuntu 22.04 Any Usenet Server Subscription (preferred) Any Usenet Indexer Subscription (preferred) Real-Debrid Subscription (if you like torrents being fast) VPN Subscription (Bare minimum needed to download torrents) Folder Structure Setup Run this command to make all folders, following TRASH-guides recommended naming scheme:\nsudo mkdir -p /data/torrents/{books,movies,music,tv} /data/usenet/{incomplete,complete/{books,movies,music,tv}} /data/media/{books,movies,music,tv} Mounting NAS I use my NAS to store all my content; this gives me one spot where everything is saved, without getting tripped up by different file systems. You do not need a NAS, and can just skip this part of the guide and use the local filesystem. I use TrueNAS Scale with SMB. 
In order to mount SMB shares to Linux filesystem we need to install CIFS:\nsudo apt install cifs-utils -y then we need to tell the system which directory to map it to, to do this:\nsudo nano /etc/fstab at the end of the file, add an entry for your NAS as such:\n//\u0026lt;NAS IP\u0026gt;/\u0026lt;NAS Share\u0026gt; /data/media cifs username=\u0026lt;user\u0026gt;,password=\u0026lt;pass\u0026gt;,uid=1000,gid=1000,auto,nofail 0 0 be sure to replace your credentials.\nTo mount your NAS, you can run:\nsudo mount -a then run the following to make sure your NAS is mounted:\nls /data/media Everything in your NAS should be showing now, but we need to set permissions, to do that run:\nsudo chown -R $USER:$USER /data sudo chmod -R a=,a+rX,u+w,g+w /data Install Docker Now we have to install Docker, I use this command to install Docker and Docker Engine:\ncurl -fsSL https://get.docker.com | sudo sh Now that docker is installed, we can add our user to the docker group so we dont have to use sudo every command:\nsudo usermod -aG docker $USER Now I would make a docker directory to store all your appdata, you can use your home directory if you want, but trash-guides recommend not doing so:\nsudo mkdir -p /docker/appdata/{radarr,sonarr,bazarr,prowlarr,lidarr,sabnzbd,qbitty,rdt} Then set permissions on the docker directory:\nsudo chown -R $USER:$USER /docker sudo chmod -R a=,a+rX,u+w,g+w /docker 2 VM Setup I have my downloaders (Sab, Qbitty, Rdt-client) on a different VM than my ARR\u0026rsquo;s, this is cause when I had everything on 1 docker host, I would have constant HTTP errors from Sab mainly, and as Sab is where I get most of my media, I decided to move to another VM, and then SMB share the download directories over to my ARR\u0026rsquo;s VM.\nYou do not have to do this, you can just have 1 docker host, up to you. It is alot less work to do all in one 1 VM.\nIf you do this, you need to replicate the origin setup, making all the same directories, then run:\nsudo apt update sudo apt install samba We need to configure Samba to tell it what we are sharing:\nsudo nano /etc/samba/smb.conf Add the following at the end of this file:\n[usenet] path = /data/usenet read only = no guest ok = no create mask = 0755 [torrents] path = /data/torrents read only = no guest ok = no create mask = 0755 To create your username and password, replace your_username with your actual username:\nsudo smbpasswd -a your_username Then restart samba with:\nsudo systemctl restart smbd Go back to your Arr VM and add the following to your /etc/fstab:\n//\u0026lt;nas-ip\u0026gt;/usenet /data/usenet cifs username=\u0026lt;username\u0026gt;,password=\u0026lt;password\u0026gt;,uid=1000,gid=1000,auto,nofail 0 0 //\u0026lt;nas-ip\u0026gt;/torrents /data/torrents cifs username=\u0026lt;username\u0026gt;,password=\u0026lt;password\u0026gt;,uid=1000,gid=1000,auto,nofail 0 0 Mount them with:\nsudo mount -a Then re-run our permissions command:\nsudo chown -R $USER:$USER /data sudo chmod -R a=,a+rX,u+w,g+w /data I would reboot this VM at this point, this will make sure it auto connects to our SMB shares at boot.\nDocker Compose Files Now that everything is setup, we can actually install the services:\nOne VM This is a full docker compose file for pretty much all major Arr\u0026rsquo;s and downloaders I use. 
I threw Lidarr in here as well, as I run Lidarr for music, but if you dont care about music you can remove lidarr:\nversion: \u0026#34;3.9\u0026#34; services: sabnzbd: image: lscr.io/linuxserver/sabnzbd:latest container_name: sabnzbd environment: - PUID=1000 - PGID=1000 - TZ=Etc/UTC volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/sabnzbd:/config - /data/usenet:/data/usenet:rw ports: - 8080:8080 restart: unless-stopped arch-qbittorrentvpn: image: binhex/arch-qbittorrentvpn:latest container_name: qbittorrentvpn volumes: - \u0026#39;/docker/appdata/qbitty:/config\u0026#39; - \u0026#39;/data/torrents/:/data/torrents\u0026#39; - \u0026#39;/etc/localtime:/etc/localtime:ro\u0026#39; ports: - \u0026#39;49550:49550\u0026#39; - \u0026#39;49551:8118\u0026#39; environment: - VPN_ENABLED=yes - VPN_PROV=protonvpn - VPN_CLIENT=wireguard - VPN_USER=username+pmp - VPN_PASS= - STRICT_PORT_FORWARD=yes - LAN_NETWORK=10.0.0.0/24 - ENABLE_PRIVOXY=yes - PUID=1000 - PGID=1000 - WEBUI_PORT=49550 - UMASK=1000 - DEBUG=false cap_add: - NET_ADMIN sysctls: - net.ipv4.conf.all.src_valid_mark=1 privileged: true network_mode: bridge restart: unless-stopped rdtclient: container_name: rdtclient volumes: - \u0026#39;/data/torrents:/data/torrents\u0026#39; - \u0026#39;/docker/appdata/rdt:/data/db\u0026#39; image: rogerfar/rdtclient restart: always logging: driver: json-file options: max-size: 10m ports: - \u0026#39;6500:6500\u0026#39; bazarr: image: lscr.io/linuxserver/bazarr:latest ports: - \u0026#34;6767:6767\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/bazarr:/config - /data/media:/data/media restart: unless-stopped environment: - PUID=1000 - PGID=1000 lidarr: image: lscr.io/linuxserver/lidarr:latest ports: - \u0026#34;8686:8686\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/lidarr:/config - /data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 prowlarr: image: lscr.io/linuxserver/prowlarr:latest ports: - \u0026#34;9696:9696\u0026#34; volumes: - /docker/appdata/prowlarr:/config restart: unless-stopped environment: - PUID=1000 - PGID=1000 radarr: image: lscr.io/linuxserver/radarr:latest ports: - \u0026#34;7878:7878\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/radarr:/config - /data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 sonarr: image: lscr.io/linuxserver/sonarr:latest ports: - \u0026#34;8989:8989\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/sonarr:/config - /data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 networks: default: name: arrs_default 2 VM Arrs:\nversion: \u0026#34;3.7\u0026#34; services: bazarr: image: lscr.io/linuxserver/bazarr:latest ports: - \u0026#34;6767:6767\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/bazarr:/config - /data/media:/data/media restart: unless-stopped environment: - PUID=1000 - PGID=1000 lidarr: image: lscr.io/linuxserver/lidarr:latest ports: - \u0026#34;8686:8686\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/lidarr:/config - /data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 prowlarr: image: lscr.io/linuxserver/prowlarr:latest ports: - \u0026#34;9696:9696\u0026#34; volumes: - /docker/appdata/prowlarr:/config restart: unless-stopped environment: - PUID=1000 - PGID=1000 radarr: image: lscr.io/linuxserver/radarr:latest ports: - \u0026#34;7878:7878\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/radarr:/config - 
/data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 sonarr: image: lscr.io/linuxserver/sonarr:latest ports: - \u0026#34;8989:8989\u0026#34; volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/sonarr:/config - /data:/data restart: unless-stopped environment: - PUID=1000 - PGID=1000 networks: default: name: arrs_default Downloaders: As stated previously, Sab downloads most of my content (95%), you do not need all 3 of these, you can just copy the Sab part and just use Usenet with Sab. But I like to have a variety.\nversion: \u0026#39;3.9\u0026#39; services: sabnzbd: image: lscr.io/linuxserver/sabnzbd:latest container_name: sabnzbd environment: - PUID=1000 - PGID=1000 - TZ=Etc/UTC volumes: - /etc/localtime:/etc/localtime:ro - /docker/appdata/sabnzbd:/config - /data/usenet:/data/usenet:rw ports: - 8080:8080 restart: unless-stopped arch-qbittorrentvpn: image: binhex/arch-qbittorrentvpn:latest container_name: qbittorrentvpn volumes: - \u0026#39;/docker/appdata/qbitty:/config\u0026#39; - \u0026#39;/data/torrents/:/data/torrents\u0026#39; - \u0026#39;/etc/localtime:/etc/localtime:ro\u0026#39; ports: - \u0026#39;49550:49550\u0026#39; - \u0026#39;49551:8118\u0026#39; environment: - VPN_ENABLED=yes - VPN_PROV=protonvpn - VPN_CLIENT=wireguard - VPN_USER=username+pmp - VPN_PASS= - STRICT_PORT_FORWARD=yes - LAN_NETWORK=10.0.0.0/24 - ENABLE_PRIVOXY=yes - PUID=1000 - PGID=1000 - WEBUI_PORT=49550 - UMASK=1000 - DEBUG=false cap_add: - NET_ADMIN sysctls: - net.ipv4.conf.all.src_valid_mark=1 privileged: true network_mode: bridge restart: unless-stopped rdtclient: container_name: rdtclient volumes: - \u0026#39;/data/torrents:/data/torrents\u0026#39; - \u0026#39;/docker/appdata/rdt:/data/db\u0026#39; image: rogerfar/rdtclient restart: always logging: driver: json-file options: max-size: 10m ports: - \u0026#39;6500:6500\u0026#39; Running Docker Compose Files In order to run these files, it depends on which option you chose, if 1 VM setup, just copy the compose file and create a new docker-compose.yml file with:\nnano docker-compose.yml Paste in the content, CTRL + X to exit nano, Y to save, ENTER to keep filename. Then run:\ndocker compose up -d If you are using 2 VM\u0026rsquo;s, you need to do this 2x. One for each docker-compose file.\nConclusion Congratulations on setting up your media library backend! We now have to go and configure all these services to work together, which I have another full blog post on which you can find here.\n","permalink":"https://mafyuh.com/posts/docker-arr-stack-guide/","summary":"This guide is for someone who is looking to setup an Arr Stack for media organization and downloading. This guide requires no remote path mappings, follows Trash-Guides recommendations and every command needed is copy-pasteable. The VM\u0026rsquo;s in this guide are hosted on Proxmox 8.1.4, but you can use any Ubuntu environment (WSL-2, VirtualBox, etc.)\nArr VM Specs:\n2 core host 8GB RAM 100GB Storage Downloader VM Specs:\n2 core host 4GB RAM 250GB Storage (can download up to this limit at a time, be careful when mass downloading or give plenty of space) Prerequisites Ubuntu 22.","title":"Docker Compose Arr Stack Guide"},{"content":"Hello! 👋 I\u0026rsquo;m Matt Reeves, a DevOps and GitOps enthusiast with a passion for self-hosting.\nBefore diving into the world of DevOps and GitOps, I honed my skills as an advanced electronics repair technician, tackling complex challenges with multimeters, oscilloscopes, and soldering irons. 
From troubleshooting intricate circuits to mastering surface-mount technology (SMT), I thrived on solving problems and learning what\u0026rsquo;s possible in electronics.\nWhile I continue to stay up-to-date with hardware, my focus has shifted more towards the software side of things. Just as I mastered the intricacies of hardware, I\u0026rsquo;m now determined to delve into the world of software and emerge as a master of DevOps, GitOps, and system administration. With the same dedication and hunger for knowledge that drove me in the realm of electronics, I\u0026rsquo;m excited to tackle the challenges of software development and infrastructure management head-on.\nWhat You\u0026rsquo;ll Find Here DevOps \u0026amp; GitOps: From CI/CD pipelines to Git-driven infrastructure. Self-Hosting: Managing my own homelab and orchestrating various services. Cybersecurity: How I keep my infrastructure safe and secure. Kubernetes \u0026amp; Docker: Pretty much everything I run is containerized. AI: I\u0026rsquo;m also passionate about artificial intelligence (AI), exploring self-hosted text-generation models like Phi, Llama-2, and Gemma, along with running image-generation Stable-Diffusion models. I show you how I integrate AI into various software projects and explore its potential impact. Other Hobbies \u0026amp; Interests MMA Junkie. I haven\u0026rsquo;t missed a major UFC event since I started watching in 2018. Milwaukee Bucks Fan. Born and raised in SE Wisconsin, been a Bucks fan my whole life. MCU Fanboy. I am a huge Marvel fan, especially Spider-Man. Gamer. I spend alot of time playing video games, for the last few years my main game has been Rocket League, as well as COD. But I play all styles of games. Pets. I have a dog named Knox who\u0026rsquo;s a husky-lab mix. I spend mostly all day everyday giving him pets. Why mafyuh? When I was 9 I needed a unique username for Google. My full name is Matthew, if you say mafyuh fast it sort-of sounds the same. Anyways, it stuck. The google account didn\u0026rsquo;t though :(\nPrivacy Policy Analytics I use Plausible for analytics, focusing on:\nPopular posts Optimal posting times User engagement Plausible collects minimal data:\nPage URL HTTP Referer Browser Operating system Device type Visitor Country Note Plausible uses JavaScript for tracking, allowing you to block it using browser extensions. Their code is open-source on GitHub.\n","permalink":"https://mafyuh.com/about/","summary":"Hello! 👋 I\u0026rsquo;m Matt Reeves, a DevOps and GitOps enthusiast with a passion for self-hosting.\nBefore diving into the world of DevOps and GitOps, I honed my skills as an advanced electronics repair technician, tackling complex challenges with multimeters, oscilloscopes, and soldering irons. From troubleshooting intricate circuits to mastering surface-mount technology (SMT), I thrived on solving problems and learning what\u0026rsquo;s possible in electronics.\nWhile I continue to stay up-to-date with hardware, my focus has shifted more towards the software side of things.","title":"About"},{"content":"Got questions, feedback, or just want to say hi? Feel free to reach out to me using the contact information below:\nEmail: admin[at]mafyuh[dot]com Discord Resume If you\u0026rsquo;re interested in my professional experience, you can download my resume below:\nDownload Resume ","permalink":"https://mafyuh.com/contact/","summary":"Got questions, feedback, or just want to say hi? 
Feel free to reach out to me using the contact information below:\nEmail: admin[at]mafyuh[dot]com Discord Resume If you\u0026rsquo;re interested in my professional experience, you can download my resume below:\nDownload Resume ","title":"Contact"},{"content":"Something I only got into recently is hosting video game servers for games that support servers. Maybe it\u0026rsquo;s just something about having another server, cause these are totally not needed. But they are pretty easy to set up thanks to the open-source community.\nSons of the Forest I wanted to play Sons one day and when I looked into multiplayer I saw there were options for servers. This sparked some Googling, which led me to this repo.\nSetting this up took a bit, as the README was not great. But I got it all figured out after reading GH Issues for who knows how long. Good old Linux permissions.\nHere is a link to the repo I used https://github.com/jammsen/docker-sons-of-the-forest-dedicated-server\nVM Details\nProxmox VM Ubuntu 22.04 Cloud image 4 core host 16GB RAM 100GB Storage First I created a sons folder in my home directory and cd into it. To make the game\u0026rsquo;s directories I run:\nmkdir game steamcmd winedata My docker-compose is the same as on GH, but it is as follows:\nversion: \u0026#39;3.9\u0026#39; services: sons-of-the-forest-dedicated-server: container_name: sons-of-the-forest-dedicated-server image: jammsen/sons-of-the-forest-dedicated-server:latest restart: always environment: ALWAYS_UPDATE_ON_START: 1 ports: - 8766:8766/udp - 27016:27016/udp - 9700:9700/udp volumes: - ./steamcmd:/steamcmd - ./game:/sonsoftheforest - ./winedata:/winedata This is in the sons folder.\nWhenever I go and play I enable the port forward rules in my pfSense. Then once I or a friend get off I disable the forwards. The logs from the container do state when it is in sleep mode, so I am thinking of an automation that updates my pfSense port forwards whenever the server is in sleep mode. Maybe one day, but for now I manually enable/disable. I do this as I don\u0026rsquo;t want any port forwards on my network; if it\u0026rsquo;s just temporary like these, it\u0026rsquo;s fine, but never leave a port forward open to game services inside your home network.\nPalworld When Palworld first came out I really wanted to mod actual Pokemon into the game, as I feel most of the Pals in the game look like AI-generated garbage. But I\u0026rsquo;m no video game mod-dev and I don\u0026rsquo;t see anything on the internet. (Who else loves Nintendo?) So I haven\u0026rsquo;t had this container spun up in a while. I haven\u0026rsquo;t even played since launch, but I paid for the game and set up a server just cause.\nWhen I googled \u0026ldquo;Palworld server github\u0026rdquo;, I laughed cause the first result was the same dev as the Sons server I run. I thought it was gonna be hard but they made this one simple; just follow the README.\nhttps://github.com/jammsen/docker-palworld-dedicated-server\nI run this container on the same VM as Sons, limiting IP reservations as well as vulnerable systems.\nSame thing goes for the folder structure here; I just made a pal folder in my home directory. I do the same thing with port forwards as I do for Sons.\nThanks to the developers of these repos for your work.\n","permalink":"https://mafyuh.com/posts/selfhosted-game-servers/","summary":"Something I only got into recently is hosting video game servers for games that support servers. Maybe it\u0026rsquo;s just something about having another server, cause these are totally not needed. 
But they are pretty easy to set up thanks to the open-source community.\nSons of the Forest I wanted to play Sons one day and when I looked into multiplayer I saw there were options for servers. This sparked some Googling, which led me to this repo.","title":"Selfhosted Game Servers"},{"content":"1st step: Increase/resize disk from GUI console 2nd step: Extend physical drive partition and check free space with: sudo growpart /dev/sda 3 sudo pvdisplay sudo pvresize /dev/sda3 sudo pvdisplay 3rd step: Extend Logical volume sudo lvdisplay sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv sudo lvdisplay 4th step: Resize Filesystem sudo resize2fs /dev/ubuntu-vg/ubuntu-lv sudo fdisk -l ","permalink":"https://mafyuh.com/posts/resize-ubuntu-vm-disk/","summary":"1st step: Increase/resize disk from GUI console 2nd step: Extend physical drive partition and check free space with: sudo growpart /dev/sda 3 sudo pvdisplay sudo pvresize /dev/sda3 sudo pvdisplay 3rd step: Extend Logical volume sudo lvdisplay sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv sudo lvdisplay 4th step: Resize Filesystem sudo resize2fs /dev/ubuntu-vg/ubuntu-lv sudo fdisk -l ","title":"Resize Ubuntu VM Disk in Proxmox"},{"content":"This is just a quick guide on how to authenticate your authentik users with Proton using SimpleLogin OIDC.\nTo accomplish this, first create a SimpleLogin account by logging in with Proton. Once that\u0026rsquo;s done, go to https://app.simplelogin.io/developer and create a website. Give it your authentik URL.\nThen go to Oauth Settings and copy your client ID and secret for the next step. Add your authentik URL as the redirect URL, like this: https://auth.example.com/source/oauth/callback/simplelogin/ (simplelogin being the slug in authentik)\nIn authentik, go to Directory - Federation and Social login - Create and create an OpenID OAuth source\nName: SimpleLogin Slug: simplelogin User matching mode: I chose link with identical email Consumer key: Paste your key Consumer secret: Paste your secret authorization url: https://app.simplelogin.io/oauth2/authorize access token url: https://app.simplelogin.io/oauth2/token profile url: https://app.simplelogin.io/oauth2/userinfo OIDC Well-known URL: https://app.simplelogin.io/.well-known/openid-configuration\nFor the logo, it appears authentik inverts your image; I don\u0026rsquo;t know if it\u0026rsquo;s dark mode or a bug, but regardless, here\u0026rsquo;s the regular and inverted image I used. Just right-click and save the image:\nNow go to Flows and Stages - Flows - choose your default authentication stage - click it then click stage bindings - Click edit stage to the right of your identification stage - expand Source settings and make sure you CTRL + click your newly created SimpleLogin source.\nYou should now be able to log out and try to log in with your Proton account!\n","permalink":"https://mafyuh.com/posts/proton-mail-authentik-social-login-setup/","summary":"This is just a quick guide on how to authenticate your authentik users with Proton using SimpleLogin OIDC.\nTo accomplish this, first create a SimpleLogin account by logging in with Proton. Once that\u0026rsquo;s done, go to https://app.simplelogin.io/developer and create a website. Give it your authentik URL.\nThen go to Oauth Settings and copy your client ID and secret for the next step. Add your authentik URL as the redirect URL, like this: https://auth.","title":"Proton Mail - SimpleLogin authentik Social Login Setup"},{"content":"I wanted a way to automate when users tell me a video on my Jellyfin server has an issue. 
For the logo, it appears authentik inverts your image; I don\u0026rsquo;t know if it\u0026rsquo;s dark mode or a bug, but regardless here\u0026rsquo;s the regular and inverted image I used. Just right-click and save the image:\nNow go to Flows and Stages - Flows - choose your default authentication stage - click it, then click Stage Bindings - click Edit Stage to the right of your identification stage - expand Source settings and make sure you CTRL + click your newly created SimpleLogin source.\nYou should now be able to log out and try logging in with your Proton account!\n","permalink":"https://mafyuh.com/posts/proton-mail-authentik-social-login-setup/","summary":"This is just a quick guide on how to authenticate your authentik users with Proton using SimpleLogin OIDC.\nTo accomplish this, first create a SimpleLogin account by logging in with Proton. Once that\u0026rsquo;s done, go to https://app.simplelogin.io/developer and create a website. Give it your authentik URL.\nThen go to OAuth Settings and copy your client ID and secret for the next step. Add your authentik URL as the redirect URL, like this: https://auth.","title":"Proton Mail - SimpleLogin authentik Social Login Setup"},{"content":"I wanted a way to automate when users tell me a video on my Jellyfin server has an issue. After a lot of trial and error, ChatGPT, Bard and I came up with this automation.\nRequirements My only requirements when making this were that it was free and self-hostable. Not even any NPM extensions are required in AP. Actual software requirements are:\nSonarr Radarr Overseerr/Jellyseerr Optional\nSMTP server or ability to send SMTP messages (can also use Discord) ActivePieces or any other automation platform that supports TS (Zapier, n8n, etc.) Here\u0026rsquo;s a great AP setup and how-to video:\nNote: I didn\u0026rsquo;t do any of the ngrok stuff. I just have Nginx Proxy Manager set up with a wildcard certificate; then just give it a domain name and point it at its ip:8080. No special Nginx config needed. Make sure you set AP_FRONTEND_URL in .env\nThis blog post is rather long; if you prefer to see the code on git, you can find all this code here.\nHow it Works Whenever a user reports an issue in Jellyseerr, a webhook is sent to ActivePieces (AP) with the issue data. This triggers the automation to mark the download as failed, delete the file, re-search, refresh the Jellyfin libraries and resolve the original issue with a comment. There is an optional feature to approve or deny the automation.\nWorks across Radarr and Sonarr, as the reported issue can be either a Movie or a TV show.\nThe only caveat is if the issue is an entire Season: we just mark the issue as resolved and leave a comment saying to submit an issue for each episode individually.\nWorks on my Jellyfin, Jellyseerr, Radarr and Sonarr setup. I don\u0026rsquo;t use Plex, but all you would have to change is the Jellyfin Refresh Library request to match Plex\u0026rsquo;s equivalent.\nHere is a pic of the full automation.\nEverything of value is logged to the console, so check there for errors. Let\u0026rsquo;s start breaking it down.\n#1 Jellyseerr Issue Reported The first thing is to create a flow in AP, select a trigger, and search for webhook. This will give you the webhook URL for Jellyseerr. Next, in Jellyseerr, under Settings - Users - Default Permissions make sure Report Issues is checked and save changes. Then under Settings - Notifications - Webhook make a webhook notification with the URL from AP, enabling just Issue Reported and Issue Reopened. This should look as follows (don\u0026rsquo;t worry about my payload showing mediaId, this has since been deleted)\nHere is my full JSON payload just in case:\n{ \u0026#34;notification_type\u0026#34;: \u0026#34;{{notification_type}}\u0026#34;, \u0026#34;event\u0026#34;: \u0026#34;{{event}}\u0026#34;, \u0026#34;subject\u0026#34;: \u0026#34;{{subject}}\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;{{message}}\u0026#34;, \u0026#34;image\u0026#34;: \u0026#34;{{image}}\u0026#34;, \u0026#34;{{media}}\u0026#34;: { \u0026#34;media_type\u0026#34;: \u0026#34;{{media_type}}\u0026#34;, \u0026#34;tmdbId\u0026#34;: \u0026#34;{{media_tmdbid}}\u0026#34;, \u0026#34;tvdbId\u0026#34;: \u0026#34;{{media_tvdbid}}\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;{{media_status}}\u0026#34;, \u0026#34;status4k\u0026#34;: \u0026#34;{{media_status4k}}\u0026#34; }, \u0026#34;{{request}}\u0026#34;: { \u0026#34;request_id\u0026#34;: \u0026#34;{{request_id}}\u0026#34;, \u0026#34;requestedBy_email\u0026#34;: \u0026#34;{{requestedBy_email}}\u0026#34;, \u0026#34;requestedBy_username\u0026#34;: \u0026#34;{{requestedBy_username}}\u0026#34;, \u0026#34;requestedBy_avatar\u0026#34;: \u0026#34;{{requestedBy_avatar}}\u0026#34;, \u0026#34;requestedBy_settings_discordId\u0026#34;: \u0026#34;{{requestedBy_settings_discordId}}\u0026#34;, \u0026#34;requestedBy_settings_telegramChatId\u0026#34;: \u0026#34;{{requestedBy_settings_telegramChatId}}\u0026#34; }, \u0026#34;{{issue}}\u0026#34;: { \u0026#34;issue_id\u0026#34;: \u0026#34;{{issue_id}}\u0026#34;, \u0026#34;issue_type\u0026#34;: \u0026#34;{{issue_type}}\u0026#34;, \u0026#34;issue_status\u0026#34;: \u0026#34;{{issue_status}}\u0026#34;, \u0026#34;reportedBy_email\u0026#34;: \u0026#34;{{reportedBy_email}}\u0026#34;, \u0026#34;reportedBy_username\u0026#34;: \u0026#34;{{reportedBy_username}}\u0026#34;, \u0026#34;reportedBy_avatar\u0026#34;: \u0026#34;{{reportedBy_avatar}}\u0026#34;, \u0026#34;reportedBy_settings_discordId\u0026#34;: \u0026#34;{{reportedBy_settings_discordId}}\u0026#34;, \u0026#34;reportedBy_settings_telegramChatId\u0026#34;: \u0026#34;{{reportedBy_settings_telegramChatId}}\u0026#34; }, \u0026#34;{{comment}}\u0026#34;: { \u0026#34;comment_message\u0026#34;: \u0026#34;{{comment_message}}\u0026#34;, \u0026#34;commentedBy_email\u0026#34;: \u0026#34;{{commentedBy_email}}\u0026#34;, \u0026#34;commentedBy_username\u0026#34;: \u0026#34;{{commentedBy_username}}\u0026#34;, \u0026#34;commentedBy_avatar\u0026#34;: \u0026#34;{{commentedBy_avatar}}\u0026#34;, \u0026#34;commentedBy_settings_discordId\u0026#34;: \u0026#34;{{commentedBy_settings_discordId}}\u0026#34;, \u0026#34;commentedBy_settings_telegramChatId\u0026#34;: \u0026#34;{{commentedBy_settings_telegramChatId}}\u0026#34; }, \u0026#34;{{extra}}\u0026#34;: [] } You should be able to report an issue on a random movie in Jellyseerr, then go to the webhook trigger and choose Generate sample data, and you should see the data from the request. I recommend doing this and creating an issue for an example movie, a TV series (All Seasons) and a TV series (1 Season). The handful of payload fields the rest of the flow relies on are typed out below.
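For reference, this is roughly how the payload looks in TS. It only types the fields the later code steps actually read, and it assumes Jellyseerr renders {{media}}, {{issue}} and {{extra}} into plain media, issue and extra keys (which is what the sample data shows on my end; adjust if yours differs):

// Shape of the Jellyseerr issue webhook fields used by the steps below.
interface JellyseerrIssueWebhook {
  notification_type: string;
  subject: string;          // e.g. "Movie Name (2024)" -- parsed by regex later
  message: string;
  image: string;
  media: {
    media_type: "movie" | "tv"; // drives the Radarr/Sonarr branch
    tmdbId: string;             // used for Radarr lookups
    tvdbId: string;             // used for Sonarr lookups
  };
  issue: {
    issue_id: string;           // used to comment on / resolve the issue
    issue_type: string;
    issue_status: string;
    reportedBy_username: string;
  };
  extra: { name: string; value: string }[]; // "Affected Season" / "Affected Episode"
}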
(Optional) #2 Create Approval Links In AP add the next step and search Approval, then create approval links.\n(Optional) #3 Send Email This is so I can either approve or deny the file from being deleted. Maybe it\u0026rsquo;s a client issue and I know for a fact my file is good and I don\u0026rsquo;t want it deleted. Thus the links are sent to me along with some data from the request, so I know what I am approving/denying.\nYou can use the core SMTP feature, but it\u0026rsquo;s limited to text. I wanted some more customizability, so I chose Resend (supports HTML) and set up an account there with one of my extra domains; the whole send boils down to one HTTP POST, sketched below.
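In case you also go the Resend route, here is a minimal TypeScript sketch of that POST. The API key, addresses, subject and links are placeholders; in AP you would drop in the approval links from step #2 and the HTML body that follows:

// Send the approval email through Resend's HTTP API.
const resp = await fetch("https://api.resend.com/emails", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.RESEND_API_KEY}`, // placeholder key
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    from: "jellyseerr@yourdomain.com",            // placeholder sender
    to: ["admin@yourdomain.com"],                 // placeholder recipient
    subject: "Jellyseerr Issue Reported",
    html: "<p>Issue reported.</p><a href=\"APPROVAL_LINK\">Approve</a> <a href=\"DISAPPROVAL_LINK\">Deny</a>",
  }),
});
if (!resp.ok) console.error("Resend request failed:", resp.status, await resp.text());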
You don\u0026rsquo;t have to use email; you can use Discord, SMS, any generic HTTP request or whatever you want. I just use email since I pay for my domains and pay Proton Mail for email, so might as well use them.\nNot gonna get too into this, I don\u0026rsquo;t care too much about it atm; customize your email to your liking, but I\u0026rsquo;ll post my somewhat-working HTML body here. I literally just copied what Bard gave me, added in data from the response, tested it, and said it looks good enough (it glitches on my mobile too).\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html lang=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;UTF-8\u0026#34;\u0026gt; \u0026lt;meta name=\u0026#34;viewport\u0026#34; content=\u0026#34;width=device-width, initial-scale=1.0\u0026#34;\u0026gt; \u0026lt;title\u0026gt;Jellyseerr Issue Reported\u0026lt;/title\u0026gt; \u0026lt;style\u0026gt; body { font-family: sans-serif; margin: 0; padding: 0; background-color: #222; color: #fff; } .container { width: 80%; margin: 0 auto; padding: 20px; background-color: #333; border-radius: 10px; box-shadow: 0px 2px 5px rgba(0, 0, 0, 0.1); } .header { display: flex; justify-content: space-between; align-items: center; padding-bottom: 20px; border-bottom: 1px solid #555; } .header h1 { font-size: 24px; font-weight: bold; margin: 0; color: #fff; } .header img { width: 50px; height: 50px; border-radius: 50%; object-fit: cover; } .content { margin: 0 auto; text-align: center; } .issue-subject { font-size: 18px; font-weight: bold; margin-bottom: 10px; color: #fff; } .issue-message { font-size: 16px; line-height: 1.5; margin-bottom: 20px; color: #ccc; } .issue-image { width: 100%; height: auto; margin-bottom: 20px; } .buttons { display: flex; justify-content: space-between; } .button { background-color: #007bff; color: #fff; padding: 10px 20px; border-radius: 5px; cursor: pointer; text-decoration: none; } .button:hover { background-color: #0056b3; } .disapprove-button { background-color: #dc3545; color: #fff; padding: 10px 20px; border-radius: 5px; cursor: pointer; text-decoration: none; } .disapprove-button:hover { background-color: #bd2830; } \u0026lt;/style\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;div class=\u0026#34;container\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;header\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;https://your-logo-url\u0026#34; alt=\u0026#34;Jellyseerr Logo\u0026#34;\u0026gt; \u0026lt;h1\u0026gt;Jellyseerr Issue Reported\u0026lt;/h1\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;content\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;issue-subject\u0026#34;\u0026gt; Jellyseerr Issue Reported \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;issue-message\u0026#34;\u0026gt; This issue was submitted by 1. Jellyseerr Issue Reported body issue reportedBy_username. \u0026lt;br\u0026gt; The reason for the issue: 1. Jellyseerr Issue Reported body message \u0026lt;br\u0026gt; Please review the issue and take appropriate action. \u0026lt;br\u0026gt; \u0026lt;img src=\u0026#34; 1. Jellyseerr Issue Reported body image \u0026#34;\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;buttons\u0026#34;\u0026gt; \u0026lt;a href=\u0026#34;2. 
Create Approval Links approvalLink \u0026#34;\u0026gt;\u0026lt;button class=\u0026#34;button\u0026#34;\u0026gt;Approve\u0026lt;/button\u0026gt;\u0026lt;/a\u0026gt; \u0026lt;a href=\u0026#34;2. Create Approval Links disapprovalLink \u0026#34;\u0026gt;\u0026lt;button class=\u0026#34;disapprove-button\u0026#34;\u0026gt;Deny\u0026lt;/button\u0026gt;\u0026lt;/a\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; And here\u0026rsquo;s what an email looks like:\n(Optional) #4 Wait for Approval Pauses flow until I approve or deny.\n#5 Radarr/Sonarr Branch As stated previously, I wanted this to work regardless if Movie or TV show. So using the core Branch feature we just say that if the media_type value from the issue contains the text movie, its true.\n#6 Radarr Mark As Failed All I do here is the Code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code (CASE SENSITIVE)\nHere is the code. Just replace api key and base url:\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const movieNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = movieNameRegex.exec(issueSubject); if (match) { const movieName = match[1]; const year = match[2]; const tmdbId = inputs.issue.media.tmdbId; console.log(`Movie name: ${movieName}`); console.log(`Year: ${year}`); console.log(`TMDB ID: ${tmdbId}`); // Define your Radarr API key and base URL const radarrApiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your Radarr API key const radarrBaseUrl = \u0026#39;https://radarr.example.com/api/v3/\u0026#39;; // Define a function to make API requests to Radarr const makeRadarrRequest = async (endpoint, method = \u0026#39;GET\u0026#39;) =\u0026gt; { const apiUrl = radarrBaseUrl + endpoint; console.log(`Calling Radarr API: ${apiUrl}`); const response = await fetch(apiUrl, { method, headers: { \u0026#39;X-Api-Key\u0026#39;: radarrApiKey, }, }); if (response.ok) { return await response.json(); } else { console.error(`Radarr API request failed: ${response.statusText}`); return null; } }; // Use Radarr\u0026#39;s API to look up the movie by TMDB ID const radarrApiResponseData = await makeRadarrRequest(`movie?tmdbId=${tmdbId}`); if (radarrApiResponseData \u0026amp;\u0026amp; radarrApiResponseData.length \u0026gt; 0) { const movieId = radarrApiResponseData[0].id; // Get the Radarr ID of the first movie console.log(\u0026#39;Radarr Movie ID:\u0026#39;, movieId); // Use the Radarr movie ID to get the history of the movie const historyApiResponseData = await makeRadarrRequest(`history/movie?movieId=${movieId}`); if (historyApiResponseData \u0026amp;\u0026amp; historyApiResponseData.length \u0026gt; 0) { const historyId = historyApiResponseData[0].id; // Get the history ID console.log(\u0026#39;History ID:\u0026#39;, historyId); // Use the history ID to mark the movie as failed const markFailedResponse = await makeRadarrRequest(`history/failed/${historyId}`, \u0026#39;POST\u0026#39;); if (markFailedResponse) { console.log(\u0026#39;Movie successfully marked as failed.\u0026#39;); } else { console.error(\u0026#39;Failed to mark movie as failed\u0026#39;); } } else { console.error(\u0026#39;No history found for movie ID:\u0026#39;, movieId); } } else { console.error(\u0026#39;No movies found for TMDB ID:\u0026#39;, tmdbId); } } }; #7 Delay 5 seconds Give time to process.\n#8 Delete Movie File I didn\u0026rsquo;t want to delete the actual movie from Radarr, but 
just the file itself, thus alot of trial and error, but a working script. All I do here is the Code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const movieNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = movieNameRegex.exec(issueSubject); if (match) { const movieName = match[1]; const year = match[2]; const tmdbId = inputs.issue.media.tmdbId; console.log(`Movie name: ${movieName}`); console.log(`Year: ${year}`); console.log(`TMDB ID: ${tmdbId}`); // Define your Radarr API key const radarrApiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your Radarr API key const radarrBaseUrl = \u0026#39;https://radarr.example.com/api/v3\u0026#39;; // Use Radarr\u0026#39;s API to look up the movie by TMDB ID and get the Radarr ID const radarrApiUrl = `${radarrBaseUrl}/movie?tmdbId=${tmdbId}`; console.log(\u0026#39;Calling Radarr API to look up the movie...\u0026#39;); const radarrApiResponse = await fetch(radarrApiUrl, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: radarrApiKey, }, }); if (radarrApiResponse.ok) { console.log(\u0026#39;Radarr API lookup successful.\u0026#39;); const radarrApiResponseData = await radarrApiResponse.json(); if (radarrApiResponseData.length \u0026gt; 0) { // If the response is an array, you should loop through the results // and access the Radarr ID for each movie. for (const movie of radarrApiResponseData) { const radarrMovieId = movie.movieFile.id; console.log(\u0026#39;Radarr Movie ID:\u0026#39;, radarrMovieId); // Use the Radarr movie ID to delete the corresponding movie file const deleteMovieFileUrl = `${radarrBaseUrl}/movieFile/${radarrMovieId}`; console.log(`Calling Radarr API to delete movie file: ${deleteMovieFileUrl}`); const deleteMovieFileResponse = await fetch(deleteMovieFileUrl, { method: \u0026#39;DELETE\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: radarrApiKey, }, }); if (deleteMovieFileResponse.ok) { console.log(`Movie file successfully deleted.`); } else { console.error(`Failed to delete movie file: ${deleteMovieFileResponse.statusText}`); } } } else { console.error(\u0026#39;No movies found for TMDB ID:\u0026#39;, tmdbId); } } else { console.error(\u0026#39;Radarr API lookup failed:\u0026#39;, radarrApiResponse.statusText); } } }; #9 Delay 5 seconds #10 Search in Radarr Researches for movie just deleted.\nAll I do here is the Code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const movieNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = movieNameRegex.exec(issueSubject); if (match) { const movieName = match[1]; const year = match[2]; const tmdbId = inputs.issue.media.tmdbId; console.log(`Movie name: ${movieName}`); console.log(`Year: ${year}`); console.log(`TMDB ID: ${tmdbId}`); // Define your Radarr API key const radarrApiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your Radarr API key const radarrBaseUrl = \u0026#39;https://radarr.example.com/api/v3\u0026#39; // Use Radarr\u0026#39;s API to look up the movie by TMDB ID and get the Radarr ID const radarrApiUrl = `${radarrBaseUrl}/movie?tmdbId=${tmdbId}`; console.log(\u0026#39;Calling Radarr API to look up the movie...\u0026#39;); const radarrApiResponse = await fetch(radarrApiUrl, { method: \u0026#39;GET\u0026#39;, headers: { 
\u0026#39;X-Api-Key\u0026#39;: radarrApiKey, }, }); if (radarrApiResponse.ok) { console.log(\u0026#39;Radarr API lookup successful.\u0026#39;); const radarrApiResponseData = await radarrApiResponse.json(); if (radarrApiResponseData.length \u0026gt; 0) { const movieId = radarrApiResponseData[0].id; // Get the Radarr ID of the first movie console.log(\u0026#39;Radarr Movie ID:\u0026#39;, movieId); // Trigger Radarr to search for the movie and download const searchUrl = `${radarrBaseUrl}/command`; console.log(`Calling Radarr API to search for the movie: ${searchUrl}`); const searchRequestBody = { name: \u0026#39;MoviesSearch\u0026#39;, movieIds: [movieId], }; const searchResponse = await fetch(searchUrl, { method: \u0026#39;POST\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: radarrApiKey, \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, }, body: JSON.stringify(searchRequestBody), }); if (searchResponse.ok) { console.log(\u0026#39;Radarr movie search initiated.\u0026#39;); } else { console.error(`Failed to initiate movie search: ${searchResponse.statusText}`); } } else { console.error(\u0026#39;No movies found for TMDB ID:\u0026#39;, tmdbId); } } else { console.error(\u0026#39;Radarr API lookup failed:\u0026#39;, radarrApiResponse.statusText); } } }; #11 Delay 4 minutes This gives your download client time to download and transfer file to mapped directory. I have Gig+ internet and 99% of the time everything is done in 4 minutes.\n#12 Scan JF Libraries Using core HTTP feature, send a http POST request to https://jellyfin.domain.com/Library/Refresh with Headers X-MediaBrowser-Token and value is your Jellyfin API Key\nI only do this as Jellyfin doesn\u0026rsquo;t scan my NAS whenever I add a new file.\n#13 Add Comment/Resolve Issue This just automatically resolves the issue in Jellyseerr and adds a comment letting the user know action was taken.\nAll I do here is the Code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueId = inputs.issue.issue_id; const apiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your actual API key const baseURL = \u0026#39;https://jellyseerr.example.com/api/v1\u0026#39; const commentApiUrl = `${baseURL}/issue/${issueId}/comment`; const statusApiUrl = `${baseURL}/issue/${issueId}/resolved`; const headers = { \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, \u0026#39;X-Api-Key\u0026#39;: apiKey, }; const commentData = { message: \u0026#39;Your issue has been approved and a new version of the content has been automatically downloaded and updated in Jellyfin. Your issue has been set to Resolved. 
If you are still experiencing problems, re-open your issue.\u0026#39;, }; const commentRequestOptions = { method: \u0026#39;POST\u0026#39;, headers: headers, body: JSON.stringify(commentData), }; try { // Post comment const commentResponse = await fetch(commentApiUrl, commentRequestOptions); const commentData = await commentResponse.json(); console.log(commentData); // Update status const statusRequestOptions = { method: \u0026#39;POST\u0026#39;, // or PUT depending on your API headers: headers, // Add any additional data required to update the status }; const statusResponse = await fetch(statusApiUrl, statusRequestOptions); const statusData = await statusResponse.json(); console.log(statusData); return true; } catch (error) { console.error(error); return false; } }; We are now done with the Radarr flow. Moving onto Sonarr.\n#14 Branch Episodes and Seasons With the issue data, we also get an \u0026ldquo;extra\u0026rdquo; field which is where the requests Affected Episode Number and Affected Season Number are. What this branch does is see if there is an affected Episode Number by seeing if that field in the data exists. You will have to create an issue for a TV show and say an entire season is affected. Then use that sample data, go back to this branch and add the value\nJellyseerr Issue Reported body extra 1 as pictured #15 Add Comment/Resolve Issue This path meant the user reported an issue on an entire season and basically sends a response to them telling them to do it individually. I probably could have gotten a script working for this but I spent a few hours on it and frustratingly gave up. Maybe I will update this in the future but for now idrc.\nAgain, all I do here is the code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueId = inputs.issue.issue_id; const apiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your actual API key const baseURL = \u0026#39;https://jellyseerr.example.com/api/v1\u0026#39; const commentApiUrl = `${baseURL}/issue/${issueId}/comment`; const statusApiUrl = `${baseURL}/issue/${issueId}/resolved`; const headers = { \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, \u0026#39;X-Api-Key\u0026#39;: apiKey, }; const commentData = { message: \u0026#39;Please do not report an entire season as the issue. Specify each Episode number. Please delete this issue and resubmit. 
Your issue has been automatically marked as Resolved.\u0026#39;, }; const commentRequestOptions = { method: \u0026#39;POST\u0026#39;, headers: headers, body: JSON.stringify(commentData), }; try { // Post comment const commentResponse = await fetch(commentApiUrl, commentRequestOptions); const commentData = await commentResponse.json(); console.log(commentData); // Update status const statusRequestOptions = { method: \u0026#39;POST\u0026#39;, headers: headers, }; const statusResponse = await fetch(statusApiUrl, statusRequestOptions); const statusData = await statusResponse.json(); console.log(statusData); return true; } catch (error) { console.error(error); return false; } }; #16 Mark as Failed Sonarr Again, all I do here is the code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const tvShowNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = tvShowNameRegex.exec(issueSubject); if (match) { const tvShowName = match[1]; const year = match[2]; const tvdbId = inputs.issue.media.tvdbId; // Using TVDB ID for TV shows console.log(`TV Show name: ${tvShowName}`); console.log(`Year: ${year}`); console.log(`TVDB ID: ${tvdbId}`); // Define your Sonarr API key and base URL const sonarrApiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your Sonarr API key const sonarrBaseUrl = \u0026#39;https://sonarr.example.com/api/v3\u0026#39;; // Use Sonarr\u0026#39;s API to look up the series by TVDB ID and get the Sonarr ID const seriesResponse = await fetch(`${sonarrBaseUrl}/series/lookup?term=tvdb:${tvdbId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (seriesResponse.ok) { const seriesData = await seriesResponse.json(); if (seriesData.length \u0026gt; 0) { const seriesId = seriesData[0].id; // Find the affected season and episode numbers const affectedSeason = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Season\u0026#39;)?.value); const affectedEpisode = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Episode\u0026#39;)?.value); console.log(\u0026#34;Season ID = \u0026#34; + affectedSeason); console.log(\u0026#34;Episode ID = \u0026#34; + affectedEpisode); // Get the history of the series const historyResponse = await fetch(`${sonarrBaseUrl}/history/series?seriesId=${seriesId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (historyResponse.ok) { const historyData = await historyResponse.json(); // Find the most recent entry that matches the affected season and episode const recentEntry = historyData.find(entry =\u0026gt; { const sourceTitleMatch = /S(\\d+)E(\\d+)/.exec(entry.sourceTitle); if (sourceTitleMatch) { const sourceSeason = parseInt(sourceTitleMatch[1]); const sourceEpisode = parseInt(sourceTitleMatch[2]); return sourceSeason === affectedSeason \u0026amp;\u0026amp; sourceEpisode === affectedEpisode; } return false; }); if (recentEntry) { const episodeId = recentEntry.episodeId; const id = recentEntry.id; // This is the ID you need for marking as failed console.log(\u0026#34;Found Episode ID = \u0026#34; + episodeId); console.log(\u0026#34;Found Most Recent Download ID = \u0026#34; + id); // Use the episode ID to mark the episode as failed const markFailedUrl = `${sonarrBaseUrl}/history/failed/${id}`; console.log(`Calling Sonarr API to mark episode as failed: 
${markFailedUrl}`); const markFailedResponse = await fetch(markFailedUrl, { method: \u0026#39;POST\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, body: JSON.stringify({ status: \u0026#39;failed\u0026#39; }), }); if (markFailedResponse.ok) { console.log(\u0026#39;Episode successfully marked as failed in Sonarr.\u0026#39;); } else { console.error(`Failed to mark episode as failed in Sonarr: ${markFailedResponse.statusText}`); } } else { console.error(\u0026#39;No matching entry found in the series history for the affected episode.\u0026#39;); } } else { console.error(\u0026#39;Failed to fetch series history:\u0026#39;, historyResponse.statusText); } } else { console.error(\u0026#39;No series found for the provided TVDB ID:\u0026#39;, tvdbId); } } else { console.error(\u0026#39;Failed to fetch series data:\u0026#39;, seriesResponse.statusText); } } }; You may have to play around a bit and see if when you run this it auto searches for the file. My Sonarr does but my Radarr doesn\u0026rsquo;t, couldnt find any setting. Regardless I include a search command and even if Sonarr searches 2 times it appears 1 will cancel out. This is why no time delay between this code and file deletion.\n#17 Delete File Sonarr Again, all I do here is the code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const tvShowNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = tvShowNameRegex.exec(issueSubject); if (match) { const tvShowName = match[1]; const year = match[2]; const tvdbId = inputs.issue.media.tvdbId; console.log(`TV Show name: ${tvShowName}`); console.log(`Year: ${year}`); console.log(`TVDB ID: ${tvdbId}`); const sonarrApiKey = \u0026#39;your-api-key\u0026#39;; const sonarrBaseUrl = \u0026#39;https://sonarr.example.com/api/v3\u0026#39;; const seriesResponse = await fetch(`${sonarrBaseUrl}/series/lookup?term=tvdb:${tvdbId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (seriesResponse.ok) { const seriesData = await seriesResponse.json(); if (seriesData.length \u0026gt; 0) { const seriesId = seriesData[0].id; const affectedSeason = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Season\u0026#39;)?.value); const affectedEpisode = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Episode\u0026#39;)?.value); const episodeFilesResponse = await fetch(`${sonarrBaseUrl}/episodefile?seriesId=${seriesId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (episodeFilesResponse.ok) { const episodeFilesData = await episodeFilesResponse.json(); const targetEpisode = episodeFilesData.find(episode =\u0026gt; { const parsedPath = episode.relativePath.match(/S(\\d+)E(\\d+)/); if (parsedPath) { const episodeSeason = parseInt(parsedPath[1]); const episodeNumber = parseInt(parsedPath[2]); return episodeSeason === affectedSeason \u0026amp;\u0026amp; episodeNumber === affectedEpisode; } return false; }); if (targetEpisode) { const targetEpisodeId = targetEpisode.id; console.log(\u0026#34;Found Episode ID = \u0026#34; + targetEpisodeId); // Delete the target episode file const deleteEpisodeUrl = `${sonarrBaseUrl}/episodefile/${targetEpisodeId}`; const deleteEpisodeResponse = await fetch(deleteEpisodeUrl, { method: \u0026#39;DELETE\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: 
sonarrApiKey, }, }); if (deleteEpisodeResponse.ok) { console.log(\u0026#39;Episode file successfully deleted in Sonarr.\u0026#39;); } else { console.error(`Failed to delete episode file in Sonarr: ${deleteEpisodeResponse.statusText}`); } } else { console.error(\u0026#39;No matching episode found in the episode files for the affected season and episode.\u0026#39;); } } else { console.error(\u0026#39;Failed to fetch episode files:\u0026#39;, episodeFilesResponse.statusText); } } else { console.error(\u0026#39;No series found for the provided TVDB ID:\u0026#39;, tvdbId); } } else { console.error(\u0026#39;Failed to fetch series data:\u0026#39;, seriesResponse.statusText); } } }; #18 Re-search in Sonarr Again, all I do here is the code function with 1 input which is the whole body message of the request, this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueSubject = inputs.issue.subject; const tvShowNameRegex = /(.*)\\s\\((\\d{4})\\)/; const match = tvShowNameRegex.exec(issueSubject); if (match) { const tvShowName = match[1]; const year = match[2]; const tvdbId = inputs.issue.media.tvdbId; console.log(`TV Show name: ${tvShowName}`); console.log(`Year: ${year}`); console.log(`TVDB ID: ${tvdbId}`); const sonarrApiKey = \u0026#39;your-api-key\u0026#39;; const sonarrBaseUrl = \u0026#39;https://sonarr.example.com/api/v3\u0026#39;; const seriesResponse = await fetch(`${sonarrBaseUrl}/series/lookup?term=tvdb:${tvdbId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (seriesResponse.ok) { const seriesData = await seriesResponse.json(); if (seriesData.length \u0026gt; 0) { const seriesId = seriesData[0].id; const affectedSeason = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Season\u0026#39;)?.value); const affectedEpisode = parseInt(inputs.issue.extra.find(item =\u0026gt; item.name === \u0026#39;Affected Episode\u0026#39;)?.value); const historyResponse = await fetch(`${sonarrBaseUrl}/history/series?seriesId=${seriesId}`, { method: \u0026#39;GET\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, }, }); if (historyResponse.ok) { const historyData = await historyResponse.json(); const recentEntry = historyData.find(entry =\u0026gt; { const sourceTitleMatch = /S(\\d+)E(\\d+)/.exec(entry.sourceTitle); if (sourceTitleMatch) { const sourceSeason = parseInt(sourceTitleMatch[1]); const sourceEpisode = parseInt(sourceTitleMatch[2]); return sourceSeason === affectedSeason \u0026amp;\u0026amp; sourceEpisode === affectedEpisode; } return false; }); if (recentEntry) { const episodeId = recentEntry.episodeId; console.log(\u0026#34;Found Episode ID = \u0026#34; + episodeId); // Perform the episode search const searchPayload = { name: \u0026#39;EpisodeSearch\u0026#39;, episodeIds: [episodeId], }; const searchResponse = await fetch(`${sonarrBaseUrl}/command`, { method: \u0026#39;POST\u0026#39;, headers: { \u0026#39;X-Api-Key\u0026#39;: sonarrApiKey, \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, }, body: JSON.stringify(searchPayload), }); if (searchResponse.ok) { console.log(\u0026#39;Episode search command successfully sent to Sonarr.\u0026#39;); } else { console.error(`Failed to send episode search command to Sonarr: ${searchResponse.statusText}`); } } else { console.error(\u0026#39;No matching entry found in the series history for the affected episode.\u0026#39;); } } else { console.error(\u0026#39;Failed to fetch series 
history:\u0026#39;, historyResponse.statusText); } } else { console.error(\u0026#39;No series found for the provided TVDB ID:\u0026#39;, tvdbId); } } else { console.error(\u0026#39;Failed to fetch series data:\u0026#39;, seriesResponse.statusText); } } }; #19 Delay for 4 Minutes Waiting for media to download and transfer.\n#20 Add Comment/Resolve Issue Again, all I do here is the code function with 1 input, which is the whole body message of the request; this is assigned to inputs.issue in the code\nexport const code = async (inputs) =\u0026gt; { const issueId = inputs.issue.issue_id; const apiKey = \u0026#39;your-api-key\u0026#39;; // Replace with your actual API key const baseURL = \u0026#39;https://jellyseerr.example.com/api/v1\u0026#39; const commentApiUrl = `${baseURL}/issue/${issueId}/comment`; const statusApiUrl = `${baseURL}/issue/${issueId}/resolved`; const headers = { \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, \u0026#39;X-Api-Key\u0026#39;: apiKey, }; const commentData = { message: \u0026#39;Your issue has been approved and a new version of the content has been automatically downloaded and updated in Jellyfin. Your issue has been set to Resolved. If you are still experiencing problems, re-open your issue.\u0026#39;, }; const commentRequestOptions = { method: \u0026#39;POST\u0026#39;, headers: headers, body: JSON.stringify(commentData), }; try { // Post comment const commentResponse = await fetch(commentApiUrl, commentRequestOptions); const commentData = await commentResponse.json(); console.log(commentData); // Update status const statusRequestOptions = { method: \u0026#39;POST\u0026#39;, headers: headers, }; const statusResponse = await fetch(statusApiUrl, statusRequestOptions); const statusData = await statusResponse.json(); console.log(statusData); return true; } catch (error) { console.error(error); return false; } }; #21 Same as #12 Conclusion Once all this is done you can publish the flow and try it out!\nIf you have any feedback you can DM me on Reddit. I\u0026rsquo;d love to see how you have edited this automation to your exact needs.\nNow the hard part: getting your users to actually report the issues in Jellyseerr and not reach out to you!\n","permalink":"https://mafyuh.com/posts/how-to-automate-jellyfin-issue-handling/","summary":"I wanted a way to automate when users tell me a video on my Jellyfin server has an issue. After a lot of trial and error, ChatGPT, Bard and I came up with this automation.\nRequirements My only requirements when making this were that it was free and self-hostable. Not even any NPM extensions are required in AP. Actual software requirements are:\nSonarr Radarr Overseerr/Jellyseerr Optional\nSMTP server or ability to send SMTP messages (can also use Discord) ActivePieces or any other automation platform that supports TS.","title":"How To Automate Jellyfin Issue Handling"},{"content":"authentik\u0026rsquo;s docs already have a guide for Guacamole. You can find that here. Follow all the instructions there (especially the part where you create a user in Guacamole with the USERNAME of your email, not just filling in the email), but if you are using Cloudflare as your DNS you may run into problems, such as an infinite redirect loop.\nError 403 Forbidden While it was looping, I checked my Guacamole docker container logs in Portainer and found the 403 Forbidden error.\n22:03:59.418 [http-nio-8080-exec-2] INFO o.a.g.a.o.t.TokenValidationService - Rejected invalid OpenID token: JWT processing failed. Additional details: [[17] Unable to process JOSE object (cause: org.jose4j.lang.UnresolvableKeyException: Unable to find a suitable verification key for JWS w/ header {\u0026#34;alg\u0026#34;:\u0026#34;RS256\u0026#34;,\u0026#34;kid\u0026#34;:\u0026#34;xxx\u0026#34;,\u0026#34;typ\u0026#34;:\u0026#34;JWT\u0026#34;} due to an unexpected exception (java.io.IOException: Non 200 status code (403 Forbidden) returned from https://example.com/application/o/guacamole/jwks/?exclude_x5) while obtaining or using keys from JWKS endpoint at https://example.com/application/o/guacamole/jwks/?exclude_x5): JsonWebSignature{\u0026#34;alg\u0026#34;:\u0026#34;RS256\u0026#34;,\u0026#34;kid\u0026#34;:\u0026#34;xxx\u0026#34;,\u0026#34;typ\u0026#34;:\u0026#34;JWT\u0026#34;} I assumed it had something to do with my Nginx Proxy Manager and the way I was proxying Guacamole. I do have WebSocket support and Block Common Exploits enabled; their docs give an nginx config that I added under Advanced.\nlocation /guacamole/ { proxy_pass http://HOSTNAME:8080; proxy_buffering off; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $http_connection; access_log off; } I messed around with settings individually for hours, reading their docs, and tried oznu\u0026rsquo;s Guacamole image as well, this time with errors about the postgres version being incompatible. I figured it could be something with Cloudflare, so I turned down my HTTPS settings. Nada. Tried SAML, more errors. I finally found this GitHub issue; thanks to Fma965 for finding the solution.\nGo to your Cloudflare Dashboard. Click on your domain\u0026rsquo;s summary and then, on the left tab, find Rules.\nUnder Page Rules - Create a New Page Rule, set the URL as your jwks URL from authentik\u0026rsquo;s provider summary. Under pick a setting, choose Browser Integrity Check and make sure it\u0026rsquo;s unchecked. Save. A quick way to verify the fix took effect is sketched below.
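One way to verify: hit the jwks URL from a script. A plain fetch has no browser user agent, so before the Page Rule it gets the same 403 the Java client did; after, it should return 200 and your signing keys. A TypeScript sketch, assuming nothing beyond the jwks URL from the log above:

// Swap in your own jwks URL from authentik's provider summary / the log above.
const jwksUrl = "https://example.com/application/o/guacamole/jwks/?exclude_x5";

const res = await fetch(jwksUrl);
console.log("status:", res.status); // want 200, not 403
if (res.ok) {
  const { keys } = await res.json() as { keys: unknown[] };
  console.log(`JWKS endpoint returned ${keys.length} signing key(s)`);
}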
This finally got me authenticated into my Guacamole instance via authentik. I spent way too much time on this integration. Anyways, hope this guide helps someone who may be in my shoes.\n","permalink":"https://mafyuh.com/posts/how-to-authenticate-guacamole-authentik-nginxproxymanager/","summary":"authentik\u0026rsquo;s docs already have a guide for Guacamole. You can find that here. Follow all the instructions there (especially the part where you create a user in Guacamole with the USERNAME of your email, not just filling in the email), but if you are using Cloudflare as your DNS you may run into problems, such as an infinite redirect loop.\nError 403 Forbidden While it was looping, I checked my Guacamole docker container logs in Portainer and found the 403 Forbidden error.","title":"How to authenticate Guacamole via authentik with Cloudflare and Nginx Proxy Manager"},{"content":"If you are getting error messages like:\n422: the change you wanted was rejected. message from saml: actioncontroller::invalidauthenticitytoken Just make sure you set these in your Nginx Proxy Manager host\u0026rsquo;s Advanced field:\nlocation / { proxy_pass http://zammad:8080; # Replace proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-Ssl on; proxy_set_header X-Forwarded-Port $server_port; proxy_set_header X-Forwarded-Host $host; } I spent way too long trying to figure this out, reading through GitHub issues, breaking my SAML provider and Zammad configs, and starting over, when the whole time it was just good old nginx header issues.\nHope this helps someone out. The fix was found on this Rails GitHub issue.\n(https://github.com/rails/rails/issues/22965)\n","permalink":"https://mafyuh.com/posts/how-to-authenticate-zammad-via-saml-with-nginx-proxy-manager/","summary":"If you are getting error messages like:\n422: the change you wanted was rejected. message from saml: actioncontroller::invalidauthenticitytoken Just make sure you set these in your Nginx Proxy Manager host\u0026rsquo;s Advanced field:\nlocation / { proxy_pass http://zammad:8080; # Replace proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-Ssl on; proxy_set_header X-Forwarded-Port $server_port; proxy_set_header X-Forwarded-Host $host; } I spent way too long trying to figure this out, reading through GitHub issues, breaking my SAML provider and Zammad configs, and starting over, when the whole time it was just good old nginx header issues.","title":"How to authenticate Zammad via SAML with Nginx Proxy Manager"},{"content":"You could do this with OpenID as well, but this method is using SAML. This guide assumes you already have running instances of Kasm Workspaces and authentik.\nThe official authentik docs don\u0026rsquo;t have a Kasm integration listed at the time of writing. So I thought I would help out anyone who is trying to integrate these services via SAML. authentik\u0026rsquo;s SAML docs can be found here.\nSetting up Kasm In the Kasm Workspaces admin, click Access Management - Authentication - SAML and create a new configuration. Make sure you enable it and make it the default after testing. You will probably find yourself switching between tabs a lot; it\u0026rsquo;s best to start creating them both at the same time, as you need links from each.\nDisplay Name: authentik Logo URL: https://auth.example.com/static/dist/assets/icons/icon.svg (or custom logo) Host Name: authentik NameID Attribute: emailAddress Entity ID: authentik Single Sign On Service/SAML 2.0 Endpoint: https://auth.example.com/application/saml/kasm/sso/binding/redirect/ X509 Certificate: Skip to the authentik setup first, then come back here. In the authentik admin, go to your newly created SAML provider; when you click the provider and are brought to its details, you should have the option to Download signing certificate. Download it and paste the file\u0026rsquo;s contents here. Setting up authentik In the authentik admin, under Applications, create a new SAML provider. Once you\u0026rsquo;ve created a provider, create an Application and set its provider to the newly created kasm provider. For simplicity\u0026rsquo;s sake, the provider and application name is kasm. (kasms pictured)\nAuthorization flow: implicit ACS URL: https://kasm.example.com/api/acs/?id=e977b6cf72c7424328275db5f48fd15ab (Single Sign-On Service from kasm photo) Issuer: authentik (must be the same as the Entity ID chosen in Kasm) Service Binding Provider: Post Audience: https://kasm.example.com/api/metadata/?id=e977b6cf72c7424328275db5f48fd15ab (Entity ID URL from Kasm photo) Under Advanced, choose a signing certificate; the default is authentik. Go back to Kasm x509 Certificate. Make sure you save your changes. A quick metadata sanity check is sketched below.
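Before testing in the UI, you can eyeball the Kasm side with a quick TypeScript fetch of the metadata URL from above. This assumes the endpoint serves standard SP metadata XML; the regex \u0026ldquo;parsing\u0026rdquo; is just for a one-off look:

// Pull Kasm's SP metadata (the Audience URL above) and print the entityID
// and ACS location that have to line up with the authentik provider.
const metadataUrl = "https://kasm.example.com/api/metadata/?id=e977b6cf72c7424328275db5f48fd15ab";

const res = await fetch(metadataUrl);
const xml = await res.text();

console.log("status:  ", res.status);
console.log("entityID:", /entityID="([^"]+)"/.exec(xml)?.[1]);
console.log("ACS:     ", /AssertionConsumerService[^>]*Location="([^"]+)"/.exec(xml)?.[1]);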
You should now be able to test SAML at the bottom of the page; once tested, I recommend opening an incognito tab and loading your Kasm website.\nYou should now be able to authenticate yourself using SAML via authentik. I recommend going back to your admin session and adding your newly created user to the admin group. Also, if it should only be you accessing this via authentik, I would change the kasm Application in authentik and bind it to your user.\nThank you to u/agent-squirrel and this subreddit post for helping me with the NameID Attribute part!\n","permalink":"https://mafyuh.com/posts/how-to-authenticate-kasm-via-authentik/","summary":"You could do this with OpenID as well, but this method is using SAML. This guide assumes you already have running instances of Kasm Workspaces and authentik.\nThe official authentik docs don\u0026rsquo;t have a Kasm integration listed at the time of writing. So I thought I would help out anyone who is trying to integrate these services via SAML. authentik\u0026rsquo;s SAML docs can be found here.\nSetting up Kasm In the Kasm Workspaces admin, click Access Management - Authentication - SAML and create a new configuration.","title":"How To Authenticate KASM via authentik"},{"content":"To \u0026lsquo;Show more options\u0026rsquo; by default in File Explorer, open Command Prompt as Administrator, then type or paste the following command:\nreg add HKCU\\Software\\Classes\\CLSID\\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\\InprocServer32 /ve /d \u0026#34;\u0026#34; /f and hit Enter.\n","permalink":"https://mafyuh.com/posts/how-to-show-more-options-by-default-in-windows-11/","summary":"To \u0026lsquo;Show more options\u0026rsquo; by default in File Explorer, open Command Prompt as Administrator, then type or paste the following command:\nreg add HKCU\\Software\\Classes\\CLSID\\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\\InprocServer32 /ve /d \u0026#34;\u0026#34; /f and hit Enter.","title":"How to Show More Options By Default in Windows 11"},{"content":"This is just a visual representation of how my current setup flows.\nI have some of the docker-compose files that make up this infra on my Gitea\n","permalink":"https://mafyuh.com/posts/network-traffic-map/","summary":"This is just a visual representation of how my current setup flows.\nI have some of the docker-compose files that make up this infra on my Gitea","title":"Network Traffic Map"},
{"content":"Just a straightforward list of pretty much everything that makes up my homelab, or systems I\u0026rsquo;ve run in the past.\nOperating Systems\nUbuntu 23.04 Ubuntu 22.04 (primary on most systems) CentOS/Fedora 38 (only when Ubuntu doesn\u0026rsquo;t play nice) Debian 11 Proxmox 8 Windows 10/11 TrueNAS Scale (virtualized) CasaOS (zimaboard) pfSense Applications/Containers\nNginx Proxy Manager Nginx Apache2 Traefik Authentik Portainer Yacht AdGuardHome Pihole Wazuh Zabbix Uptime Kuma Ghost (this blog) Wordpress Hydroxide (proton mail bridge) Calibre Smokeping Openspeedtest Grafana Prometheus InfluxDB PostgreSQL MySQL Watchtower Apache Guacamole Ansible Terraform Packer Vaultwarden Kasm Workspaces Jellyfin Plex Twingate Tailscale Headscale Wireguard LinkStack N8N Gotify Nextcloud Immich AI\nGPT4ALL Stable Diffusion LocalAI Auto-GPT Comfy UI Arr Suite\nRadarr Sonarr Prowlarr Lidarr Jellyseer Tdarr Requesterr Real Debrid Client Wizarr ","permalink":"https://mafyuh.com/posts/software/","summary":"Just a straightforward list of pretty much everything that makes up my homelab, or systems I\u0026rsquo;ve run in the past.\nOperating Systems\nUbuntu 23.04 Ubuntu 22.04 (primary on most systems) CentOS/Fedora 38 (only when Ubuntu doesn\u0026rsquo;t play nice) Debian 11 Proxmox 8 Windows 10/11 TrueNAS Scale (virtualized) CasaOS (zimaboard) pfSense Applications/Containers\nNginx Proxy Manager Nginx Apache2 Traefik Authentik Portainer Yacht AdGuardHome Pihole Wazuh Zabbix Uptime Kuma Ghost (this blog) Wordpress Hydroxide (proton mail bridge) Calibre Smokeping Openspeedtest Grafana Prometheus InfluxDB PostgreSQL MySQL Watchtower Apache Guacamole Ansible Terraform Packer Vaultwarden Kasm Workspaces Jellyfin Plex Twingate Tailscale Headscale Wireguard LinkStack N8N Gotify Nextcloud Immich AI","title":"Software"},
{"content":"Most of my infrastructure is hosted on my in-lab Proxmox server, along with a few new machines for dedicated services. Here are the specs of some of the in-lab machines.\nProxmox Server CPU: Intel Core i7-9700K GPU: Nvidia GeForce GTX 1660 6GB RAM: 64GB DDR4 3000Mhz NVME SSD\u0026rsquo;s for storage 4x 4TB HDD\u0026rsquo;s (passthrough to NAS) Gaming PC CPU: Intel Core i7-13700K GPU: Nvidia GeForce RTX 3080 RAM: 64GB DDR5 6000 Mhz SSD: Samsung 980 Pro 2TB Mobo: MPG Z790 EDGE WIFI Windows 11 Pro Main PC used for everything. I just remote into every other machine. Yes, it is on top of my mini-fridge. Yes, my cable management is terrible.\nNetworking ISP: Xfinity. Coax currently getting 2.0Gbps download and 80mbps upload. (my in-lab monitoring averages 2.21Gbps down and 76mbps up) Router: pfSense box AP\u0026rsquo;s: TP-Link Deco XE75 PRO (x3) WIFI 6E Mesh Switch: TRENDnet 6-port 10G ","permalink":"https://mafyuh.com/posts/hardware/","summary":"Most of my infrastructure is hosted on my in-lab Proxmox server, along with a few new machines for dedicated services. Here are the specs of some of the in-lab machines.\nProxmox Server CPU: Intel Core i7-9700K GPU: Nvidia GeForce GTX 1660 6GB RAM: 64GB DDR4 3000Mhz NVME SSD\u0026rsquo;s for storage 4x 4TB HDD\u0026rsquo;s (passthrough to NAS) Gaming PC CPU: Intel Core i7-13700K GPU: Nvidia GeForce RTX 3080 RAM: 64GB DDR5 6000 Mhz SSD: Samsung 980 Pro 2TB Mobo: MPG Z790 EDGE WIFI Windows 11 Pro Main PC used for everything.","title":"Hardware"}] \ No newline at end of file diff --git a/posts/index.html b/posts/index.html index 6ee72e8..66e9017 100644 --- a/posts/index.html +++ b/posts/index.html @@ -169,7 +169,7 @@

I wanted to create an SPL token and after looking online I couldn’t find an updated guide. I mainly just found Keyglowmax (SCAM). So I thought I would learn and share. There are much easier ways to create these tokens but they cost $ and spending more $ than needed is no fun. They also have you connect your wallet which is enough of a worry. This guide costs as little SOL as possible as everything is transacted directly on-chain....

- + diff --git a/posts/spl-token-cli/index.html b/posts/spl-token-cli/index.html index 7083d5e..ecdbbb1 100644 --- a/posts/spl-token-cli/index.html +++ b/posts/spl-token-cli/index.html @@ -99,8 +99,8 @@ "keywords": [ "Homelab" ], - "articleBody": "I wanted to create an SPL token and after looking online I couldn’t find an updated guide. I mainly just found Keyglowmax (SCAM). So I thought I would learn and share. There are much easier ways to create these tokens but they cost $ and spending more $ than needed is no fun. They also have you connect your wallet which is enough of a worry. This guide costs as little SOL as possible as everything is transacted directly on-chain. Everything is done from the CLI.\nThis guide just covers the basics, the tools used are way more powerful than what I use them for, this is just creating a basic token with no taxes or locked supply or anything complex, but these tools do support those options. If you are interested in doing more I would read the proper documentation.\nhttps://docs.solanalabs.com/cli/install https://metaboss.rs/overview.html https://spl.solana.com/token NetworkChuck has a video from late 2021 on doing this, but some commands are a bit outdated, and Solana updated their entire metadata process in 2022.\nI am using an Ubuntu 22.04 VM with 60GB storage to run these commands.\nStarting balance: 0.079975 SOL Ending balance: 0.05731652 SOL Total SOL cost: 0.02265848 SOL ($4.22 on 3/15/2024) Installing Solana Tools First we need to download Solana tools to our system:\nsh -c \"$(curl -sSfL https://release.solana.com/stable/install)\" then run the export path command that is given to you:\nexport PATH=\"/home/mafyuh/.local/share/solana/install/active_release/bin:$PATH\" Restart your terminal session.\nCreating Wallet We will create a new SOL wallet to fund our token. To do this run:\nsolana-keygen new You don’t have to put a passphrase if you don’t want to. I would backup your recovery seed phrase and take note of the public address. I would fund this wallet with some SOL as well at this time.\nKeep note of the keypair directory for later step.\nCheck your SOL balance with:\nsolana balance Install Rust We need Rust in order to create the token, to install Rust run:\ncurl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh Press enter for default installation. Once completed, restart your session again.\nThen we need to install some needed packages:\nsudo apt install libudev-dev llvm libclang-dev libssl-dev pkg-config build-essential protobuf-compiler -y Install spl-token-cli Now using Rust we are gonna install Solana’s CLI tools, this will take a few minutes.\ncargo install spl-token-cli Create Token Creating a new token is simple, make sure your wallet is funded with SOL and just run:\nspl-token create-token Your token’s address will be printed on screen. You will use this address in pretty much all the rest of the steps so keep handy.\nNote this creates a 9 decimal token, with no extensions, if you want to change this and add complexity to the token check out this\nIf you want to create a token with different that 9 decimals use:\nspl-token create-token --decimals \u003c# of decimals\u003e For a list of all things you can do run:\nspl-token create-token --help Now we need to create a token account for this token:\nspl-token create-account Example:\nspl-token create-account 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr If you get errors like:\n“unable to confirm transaction. 
This can happen in situations such as transaction expiration and insufficient fee-payer funds”\nYou just need to retry a few times, it will eventually go thru but sometimes takes 3-4 runs.\nMinting Tokens Now that you have a token and an account for the token, you can actually mint some tokens. To do this run:\nspl-token mint \u003c# of tokens\u003e Example:\nspl-token mint 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr 1000000 CkaGbdriXVMHtzFBPtnpDjQvZ9gM9bwd8XdTTKR2Wx32 To see your tokens you can run:\nspl-token accounts Now you will want to send these tokens to a new address, so make a new wallet and get its pubkey, then to send these tokens run:\nspl-token transfer --fund-recipient --allow-unfunded-recipient \u003c# of tokens\u003e Example:\nspl-token transfer --fund-recipient --allow-unfunded-recipient 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr 1000000 2DDyEt5N4y77ETWhhUmkZiympQbpjkfrt8FcMKhB1iWU Installing Metaboss Once this completes you can install metaboss which is needed to upload metadata. You can try to use spl-token built in metadata uploader as well, using –enable-metadata and initialize-metadata during token creation, but I couldn’t get this to work. Metaboss worked 1st try, again, this takes some time:\ncargo install metaboss Arweave/Github While we wait on metaboss to install, we should start uploading our tokens Logo to a cloud provider, I use Arweave in this example but you can use anything really. There are also many ways to upload to arweave so this is just a friendly example thats free.\nFirst create an account at https://akord.com/use-arweave Upload your image to a new vault. (PNG) Click on the information icon next to your image and copy the arweave.net URL. (Not under Share) We need this for our JSON file we will create next.\nNow you can create a json file, and in it paste the following:\n{ \"name\": \"TOKEN_NAME\", \"symbol\": \"SYM\", \"description\": \"Description of token\", \"image\": \"https://arweave.net/image-url-from-above\" } If you want metadata extensions use:\n{ \"name\": \"TOKEN_NAME\", \"symbol\": \"SYM\", \"description\": \"Small description of your token.\", \"image\": \"https://arweave.net/image-url-from-above\", \"extensions\": { \"website\": \"\", \"twitter\": \"\", \"telegram\": \"\" } } Now save this file with .json extension and upload it to Arweave just like the image. Now we need this JSON file’s Arweave link. Copy it from akord and create a new json file in your Solana server’s working directory. Fill in the following:\n{ \"name\": \"TOKEN_NAME\", \"symbol\": \"SYM\", \"uri\": \"https://arweave.net/json-file-arweave-url\" } Using the JSON file’s Arweave link as the URI. Name this file metadata.json.\nIf you are using Github, just make a new repo, upload the json file and image, copy the RAW url. URL should look like https://raw.githubusercontent.com/xxxxxxx. Probably easier to use Github tbh, especially if you don’t even know what Arweave is.\nCreating Metadata First we need to update our RPC URL, to set to mainnet run:\nsolana config set --url https://api.mainnet-beta.solana.com --keypair /home/mafyuh/.config/solana/id.json Filling in your keypair directory from earlier.\nNow that metaboss is installed, we just need to run 1 command to create our tokens metadata, again it may take a few tries:\nmetaboss create metadata -a -m metadata.json You should be able to go to solscan and see your updated metadata! 
It should appear in the SOL wallets soon after.\nUpdating Metadata If you ever need to update your metadata, you can do so by running:\nmetaboss update uri --keypair /home/mafyuh/.config/solana/id.json --account --new-uri https://arweave.net/new-arweave-json-url or you can just edit your existing json file.\nBONUS Creating a Market Now that you have a coin ready to go, you probably wanna get it listed so others can buy, I’ll try to make this process as cheap and easy as possible. Thanks to this Reddit post for finding these values.\nYou need to connect your wallet and have the tokens in the wallet that is connected for this to work, so either restore your private key or send tokens to your wallet on PC.\nNote I would not create this small of a market for a production coin, as what you are paying for when creating a market is essentially space on the blockchain for all your transactions. Long term projects should certainly not pay this little for a market, probably only good for smaller meme coins. If you are planning a long-term project you should probably be paying a few SOL for your market fee.\nRaydium has some good docs on how to create a market and pool, I would review these docs as well.\nFirst go to https://openbook-explorer.xyz/market/create Click Existing under mints Base Mint: Your token address Quote Mint: So11111111111111111111111111111111111111112 (this is swapping for SOL) Under Mints , since by default our token was 9 decimals, we will set these values Min Order size: 0.1 Price Tick: 0.99999998 or 0.99999999 Under advanced options check use advanced options. (this is what we are paying for, if long-term pay the 2.78 SOL) Event Queue Length: 128 Request Queue Length: 63 Orderbook Length: 201 At this time the cost to create this market is 0.32 SOL. Keep note of the market address.\nBONUS Creating Pool Now that we have a market, we need to create a pool. I’ve found Raydium to be the cheapest fee, but I would not cheap out on how much SOL you delegate to the pool as this is gonna be your liquidity, and having almost no liquidity is gonna be big red flag. But I have in the past just delegated .1 SOL and it worked, but trust me this is not gonna work out well.\nFirst go to https://raydium.io/liquidity/create/ Connect Wallet Paste Market ID Under Price and initial liquidity What we are doing here is setting our tokens starting price, the amount of tokens you put in the pool at the start decides how much they’re worth compared to SOL. All your tokenomics and things like this should probably already be done at this point, unless you’re just YOLO’ing it like I did. This is by far the most costly part of the process. Set a certain start time if you want. Hit Initialize Liquidity Pool and confirm in your wallet. The total fee currently is .68 SOL to create this pool.\nYou will recieve all the LP tokens in your wallet.\nBurning LP/ Revoke Authority You will probably want to burn these LP tokens so buyers won’t be scared off. There are many ways to do this, you can use the cli using this command:\nspl-token burn You can get the address on Solscan. Some wallets like solflare allow you to burn tokens thru the wallet. Or you can use online services like https://sol-incinerator.com/\nYou will also want to revoke mint authority as well as freeze authority by running:\nspl-token authorize freeze --disable And for mint authority:\nspl-token authorize mint --disable If you want to get your price to show on the wallets, you need to get listed on CoinGecko. 
There’s a bunch of requirements, to apply here is a link.\nTo get listed on Jupiter, they will automatically list your token once it hits some benchmarks which can be found here\nNow you just need to start your social media campaigns and best of luck! You can send your boy some of your tokens as thanks @ 3RYPrKxC6BNv3XUMf8Cyjg36pw6Qu1txRvqq6LNq9Psj\nTotal in Fees: 1 SOL (plus your liquidity)\nHope this guide has helped you save some $ when creating your Solana tokens!\n", - "wordCount" : "1708", + "articleBody": "I wanted to create an SPL token and after looking online I couldn’t find an updated guide. I mainly just found Keyglowmax (SCAM). So I thought I would learn and share. There are much easier ways to create these tokens but they cost $ and spending more $ than needed is no fun. They also have you connect your wallet which is enough of a worry. This guide costs as little SOL as possible as everything is transacted directly on-chain. Everything is done from the CLI.\nThis guide just covers the basics, the tools used are way more powerful than what I use them for, this is just creating a basic token with no taxes or locked supply or anything complex, but these tools do support those options. If you are interested in doing more I would read the proper documentation.\nhttps://docs.solanalabs.com/cli/install https://metaboss.rs/overview.html https://spl.solana.com/token NetworkChuck has a video from late 2021 on doing this, but some commands are a bit outdated, and Solana updated their entire metadata process in 2022.\nI am using an Ubuntu 22.04 VM with 60GB storage to run these commands.\nStarting balance: 0.079975 SOL Ending balance: 0.05731652 SOL Total SOL cost: 0.02265848 SOL ($4.22 on 3/15/2024) Installing Solana Tools First we need to download Solana tools to our system:\nsh -c \"$(curl -sSfL https://release.solana.com/stable/install)\" then run the export path command that is given to you:\nexport PATH=\"/home/mafyuh/.local/share/solana/install/active_release/bin:$PATH\" Restart your terminal session.\nCreating Wallet We will create a new SOL wallet to fund our token. To do this run:\nsolana-keygen new --derivation-path \"m/44'/501'/0'/0'\" --force --no-bip39-passphrase Credit to u/nel0_angel0 on finding the –derivation-path flag\nI would backup your recovery seed phrase and take note of the public address. I would fund this wallet with some SOL as well at this time. It’s best to restore this private key in your wallet on PC/phone. (Phantom, Solflare, etc)\nKeep note of the keypair directory for later step.\nCheck your SOL balance with:\nsolana balance Install Rust We need Rust in order to create the token, to install Rust run:\ncurl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh Press enter for default installation. Once completed, restart your session again.\nThen we need to install some needed packages:\nsudo apt install libudev-dev llvm libclang-dev libssl-dev pkg-config build-essential protobuf-compiler -y Install spl-token-cli Now using Rust we are gonna install Solana’s CLI tools, this will take a few minutes.\ncargo install spl-token-cli Create Token Creating a new token is simple, make sure your wallet is funded with SOL and just run:\nspl-token create-token Your token’s address will be printed on screen. 
You will use this address in pretty much all the rest of the steps, so keep it handy.\nNote this creates a 9-decimal token with no extensions; if you want to change this and add complexity to the token, check out this\nIf you want to create a token with a different number of decimals than 9, use:\nspl-token create-token --decimals \u003c# of decimals\u003e For a list of all things you can do run:\nspl-token create-token --help Now we need to create a token account for this token:\nspl-token create-account \u003cTOKEN_ADDRESS\u003e Example:\nspl-token create-account 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr If you get errors like:\n“unable to confirm transaction. This can happen in situations such as transaction expiration and insufficient fee-payer funds”\nYou just need to retry a few times; it will eventually go through, but it sometimes takes 3-4 runs.\nMinting Tokens Now that you have a token and an account for the token, you can actually mint some tokens. To do this run:\nspl-token mint \u003cTOKEN_ADDRESS\u003e \u003c# of tokens\u003e Example:\nspl-token mint 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr 1000000 CkaGbdriXVMHtzFBPtnpDjQvZ9gM9bwd8XdTTKR2Wx32 To see your tokens you can run:\nspl-token accounts Now if you want to send these tokens to a new address, just run:\nspl-token transfer --fund-recipient --allow-unfunded-recipient \u003cTOKEN_ADDRESS\u003e \u003c# of tokens\u003e \u003cNEW_ADDRESS\u003e Example:\nspl-token transfer --fund-recipient --allow-unfunded-recipient 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr 1000000 2DDyEt5N4y77ETWhhUmkZiympQbpjkfrt8FcMKhB1iWU This won’t be needed if you restored your private key in your wallet.\nInstalling Metaboss Once this completes you can install Metaboss, which is needed to upload metadata. You can try spl-token’s built-in metadata uploader as well, using --enable-metadata and initialize-metadata during token creation, but I couldn’t get this to work. Metaboss worked first try. Again, this takes some time:\ncargo install metaboss Arweave/GitHub While we wait on Metaboss to install, we should start uploading our token’s logo to a cloud provider. I use Arweave in this example, but you can use anything really. There are also many ways to upload to Arweave, so this is just a friendly example that’s free.\nFirst create an account at https://akord.com/use-arweave Upload your image to a new vault. (PNG) Click on the information icon next to your image and copy the arweave.net URL. (Not under Share) We need this for our JSON file we will create next.\nNow you can create a JSON file, and in it paste the following:\n{ \"name\": \"TOKEN_NAME\", \"symbol\": \"SYM\", \"description\": \"Description of token\", \"image\": \"https://arweave.net/image-url-from-above\" } If you want metadata extensions use:\n{ \"name\": \"TOKEN_NAME\", \"symbol\": \"SYM\", \"description\": \"Small description of your token.\", \"image\": \"https://arweave.net/image-url-from-above\", \"extensions\": { \"website\": \"\", \"twitter\": \"\", \"telegram\": \"\" } } Now save this file with a .json extension and upload it to Arweave just like the image. Now we need this JSON file’s Arweave link. Copy it from Akord and create a new JSON file in your Solana server’s working directory. Fill in the following:\n{ \"name\": \"TOKEN_NAME\", \"symbol\": \"SYM\", \"uri\": \"https://arweave.net/json-file-arweave-url\" } Use the JSON file’s Arweave link as the URI, and name this file metadata.json.\nIf you are using GitHub, just make a new repo, upload the JSON file and image, and copy the raw URL. The URL should look like https://raw.githubusercontent.com/xxxxxxx. 
Probably easier to use GitHub tbh, especially if you don’t even know what Arweave is.\nCreating Metadata First we need to update our RPC URL; to set it to mainnet run:\nsolana config set --url https://api.mainnet-beta.solana.com --keypair /home/mafyuh/.config/solana/id.json Filling in your keypair directory from earlier.\nNow that Metaboss is installed, we just need to run one command to create our token’s metadata; again, it may take a few tries:\nmetaboss create metadata -a \u003cTOKEN_ADDRESS\u003e -m metadata.json You should be able to go to Solscan and see your updated metadata! It should appear in the SOL wallets soon after.\nUpdating Metadata If you ever need to update your metadata, you can do so by running:\nmetaboss update uri --keypair /home/mafyuh/.config/solana/id.json --account \u003cTOKEN_ADDRESS\u003e --new-uri https://arweave.net/new-arweave-json-url or you can just edit your existing JSON file.\nBONUS Creating a Market Now that you have a coin ready to go, you probably wanna get it listed so others can buy. I’ll try to make this process as cheap and easy as possible. Thanks to this Reddit post for finding these values.\nYou need to connect your wallet and have the tokens in the wallet that is connected for this to work, so either restore your private key or send tokens to your wallet on PC.\nNote I would not create this small of a market for a production coin, as what you are paying for when creating a market is essentially space on the blockchain for all your transactions. Long-term projects should certainly not pay this little for a market; this is probably only good for smaller meme coins. If you are planning a long-term project you should probably be paying a few SOL for your market fee.\nRaydium has some good docs on how to create a market and pool; I would review these docs as well.\nFirst go to https://openbook-explorer.xyz/market/create Click Existing under mints Base Mint: Your token address Quote Mint: So11111111111111111111111111111111111111112 (this is swapping for SOL) Under Mints, since by default our token was 9 decimals, we will set these values Min Order size: 0.1 Price Tick: 0.99999998 or 0.99999999 Under advanced options, check use advanced options. (this is what we are paying for; if long-term, pay the 2.78 SOL) Event Queue Length: 128 Request Queue Length: 63 Orderbook Length: 201 At this time the cost to create this market is 0.32 SOL. Keep note of the market address.\nBONUS Creating Pool Now that we have a market, we need to create a pool. I’ve found Raydium to have the cheapest fee, but I would not cheap out on how much SOL you delegate to the pool, as this is gonna be your liquidity, and having almost no liquidity is gonna be a big red flag. I have in the past just delegated 0.1 SOL and it worked, but trust me, this is not gonna work out well.\nFirst go to https://raydium.io/liquidity/create/ Connect Wallet Paste Market ID Under Price and initial liquidity What we are doing here is setting our token’s starting price: the amount of tokens you put in the pool at the start decides how much they’re worth compared to SOL. All your tokenomics and things like this should probably already be done at this point, unless you’re just YOLO’ing it like I did. This is by far the most costly part of the process. Set a certain start time if you want. Hit Initialize Liquidity Pool and confirm in your wallet. The total fee currently is 0.68 SOL to create this pool.\nYou will receive all the LP tokens in your wallet.\nBurning LP / Revoke Authority You will probably want to burn these LP tokens so buyers won’t be scared off. 
There are many ways to do this; you can use the CLI with this command:\nspl-token burn \u003cTOKEN_ACCOUNT_ADDRESS\u003e \u003cAMOUNT\u003e You can get the address on Solscan. Some wallets like Solflare allow you to burn tokens through the wallet. Or you can use online services like https://sol-incinerator.com/\nYou will also want to revoke mint authority as well as freeze authority by running:\nspl-token authorize \u003cTOKEN_ADDRESS\u003e freeze --disable And for mint authority:\nspl-token authorize \u003cTOKEN_ADDRESS\u003e mint --disable If you want your price to show in the wallets, you need to get listed on CoinGecko. There’s a bunch of requirements; to apply, here is a link.\nTo get listed on Jupiter, they will automatically list your token once it hits some benchmarks, which can be found here\nNow you just need to start your social media campaigns and best of luck! You can send your boy some of your tokens as thanks @ 3RYPrKxC6BNv3XUMf8Cyjg36pw6Qu1txRvqq6LNq9Psj\nTotal in Fees: 1 SOL (plus your liquidity)\nHope this guide has helped you save some $ when creating your Solana tokens!\n", + "wordCount" : "1723", "inLanguage": "en", "datePublished": "2024-03-15T00:13:40Z", "dateModified": "2024-03-15T00:13:40Z", @@ -194,7 +194,7 @@

How to create a Solana Token (SPL) from CLI with metadata

-
March 15, 2024 · 9 min · 1708 words · Matt
+
March 15, 2024 · 9 min · 1723 words · Matt
@@ -246,8 +246,9 @@

Restart your terminal session.
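To confirm the tools landed on your PATH after restarting, a quick version check helps (the exact output varies with the current release):

solana --version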

Creating Wallet

We will create a new SOL wallet to fund our token. To do this run:

-
solana-keygen new
-

You don’t have to put a passphrase if you don’t want to. I would backup your recovery seed phrase and take note of the public address. I would fund this wallet with some SOL as well at this time.

+
solana-keygen new --derivation-path "m/44'/501'/0'/0'" --force --no-bip39-passphrase
+

Credit to u/nel0_angel0 for finding the --derivation-path flag.

+

I would back up your recovery seed phrase and take note of the public address. I would also fund this wallet with some SOL at this time. It’s best to restore this private key in your wallet on PC/phone (Phantom, Solflare, etc.).

Keep note of the keypair directory for a later step.
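If you’re not sure where that is, these print the configured keypair path and the wallet’s public address (assuming you kept the default config location):

solana config get
solana address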

Check your SOL balance with:

solana balance
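You can also check the balance of any other address without switching keypairs (replace the placeholder with the address to check):

solana balance <PUBKEY>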
@@ -283,11 +284,12 @@
 
spl-token mint 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr 1000000 CkaGbdriXVMHtzFBPtnpDjQvZ9gM9bwd8XdTTKR2Wx32
 

To see your tokens you can run:

spl-token accounts
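If you only care about one mint, these also work; the address here is just the example mint from above:

spl-token balance 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr
spl-token supply 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr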
-

Now you will want to send these tokens to a new address, so make a new wallet and get its pubkey, then to send these tokens run:

+

Now if you want to send these tokens to a new address, just run:

spl-token transfer --fund-recipient --allow-unfunded-recipient <TOKEN_ADDRESS> <# of tokens> <NEW_ADDRESS>
 

Example:

spl-token transfer --fund-recipient --allow-unfunded-recipient 7njsg9BA1xvXX9DNpe5fERHK4zb7MbCHKZ6zsx5k3adr 1000000 2DDyEt5N4y77ETWhhUmkZiympQbpjkfrt8FcMKhB1iWU
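To sanity-check that the transfer landed, you can list the recipient’s token accounts; the address here is just the example recipient from above:

spl-token accounts --owner 2DDyEt5N4y77ETWhhUmkZiympQbpjkfrt8FcMKhB1iWU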
-

Installing Metaboss

+

This won’t be needed if you restored your private key in your wallet.

+

Installing Metaboss

Once this completes you can install Metaboss, which is needed to upload metadata. You can try spl-token’s built-in metadata uploader as well, using --enable-metadata and initialize-metadata during token creation, but I couldn’t get this to work. Metaboss worked first try. Again, this takes some time:

cargo install metaboss
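If the build fails on a dependency version mismatch, pinning to the crate’s lockfile is worth a try (assuming the published crate ships one):

cargo install metaboss --locked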
 

Arweave/GitHub

diff --git a/tags/homelab/index.html b/tags/homelab/index.html index f25299c..5e7752e 100644 --- a/tags/homelab/index.html +++ b/tags/homelab/index.html @@ -154,7 +154,7 @@

I wanted to create an SPL token and after looking online I couldn’t find an updated guide. I mainly just found Keyglowmax (SCAM). So I thought I would learn and share. There are much easier ways to create these tokens but they cost $ and spending more $ than needed is no fun. They also have you connect your wallet which is enough of a worry. This guide costs as little SOL as possible as everything is transacted directly on-chain....

-
March 15, 2024 · 9 min · 1708 words · Matt
+
March 15, 2024 · 9 min · 1723 words · Matt