<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[The Blog]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://pierewoehl.de/</link><image><url>https://pierewoehl.de/favicon.png</url><title>The Blog</title><link>https://pierewoehl.de/</link></image><generator>Ghost 5.86</generator><lastBuildDate>Fri, 15 May 2026 12:17:31 GMT</lastBuildDate><atom:link href="https://pierewoehl.de/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Einrichten eines Dedicated Servers mit Proxmox und IPv6 auf Hetzner]]></title><description><![CDATA[<p>!!! Der Text wurde mit hilfe von Copilot erstellt als probe, viele teile habe ich aber ueberarbeitet. !!!</p><p>Wenn du einen Dedicated Server bei Hetzner mieten m&#xF6;chtest, dich Proxmox VE und IPv6 faszinieren und du Lust auf ein bisschen T&#xFC;fteln hast, bist du hier genau richtig. In diesem</p>]]></description><link>https://pierewoehl.de/einrichten-eines-dedicated-servers-mit-proxmox-und-ipv6-auf-hetzner/</link><guid isPermaLink="false">67bf757b9534e91be5ff2a03</guid><category><![CDATA[hetzner]]></category><category><![CDATA[ipv6]]></category><category><![CDATA[proxmox]]></category><category><![CDATA[dedicated]]></category><category><![CDATA[server]]></category><category><![CDATA[opnsense]]></category><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Wed, 26 Feb 2025 21:13:05 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1521106047354-5a5b85e819ee?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDU0fHxjb21wdXRlciUyMG5ldHdvcmt8ZW58MHx8fHwxNzQwNjA0MjYwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1521106047354-5a5b85e819ee?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDU0fHxjb21wdXRlciUyMG5ldHdvcmt8ZW58MHx8fHwxNzQwNjA0MjYwfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Einrichten eines Dedicated Servers mit Proxmox und IPv6 auf Hetzner"><p>!!! Der Text wurde mit hilfe von Copilot erstellt als probe, viele teile habe ich aber ueberarbeitet. !!!</p><p>Wenn du einen Dedicated Server bei Hetzner mieten m&#xF6;chtest, dich Proxmox VE und IPv6 faszinieren und du Lust auf ein bisschen T&#xFC;fteln hast, bist du hier genau richtig. In diesem Blogartikel zeige ich dir, wie du einen Server mit Proxmox VE einrichtest und IPv6 im Routing-Modus mit OPNsense konfigurierst. 
We follow the guide on the Hetzner community site: <a href="https://community.hetzner.com/tutorials/install-and-configure-proxmox_ve?ref=pierewoehl.de#example-of-the-host-configuration-in-routed-setup">Hetzner Proxmox VE Tutorial</a>.</p><p>I got myself a dedicated server with IPv6 only, no IPv4 included.</p><h4 id="schritt-1-installation-von-proxmox-ve">Step 1: Installing Proxmox VE</h4><p>To install Proxmox at Hetzner you either open a ticket and ask them to attach a USB stick to the remote console, or you can in theory install it by hand on top of a Debian system from the Proxmox repositories.</p><p>I took the first route.</p><h4 id="schritt-2-ipv6-und-pr%C3%A4fix-korrekt-konfigurieren">Step 2: Configuring IPv6 and the prefix correctly</h4><p>To make sure your network and the OPNsense VM work smoothly, you have to configure the IPv6 prefix correctly.</p><p>You have to split your prefix into four elements:</p><ol><li>A static IP for the Proxmox host</li><li>A static IP for the internal VM routing interface</li><li>A static IP for the OPNsense</li><li>A prefix that OPNsense can hand out, delegated from the prefix of IP 2</li></ol><p>IP 1 has to live in a different prefix than IP 2 and IP 3, and the latter two have to sit in the same scope, i.e. in the same prefix. This becomes clearer when we get to the bridges.</p><p>Unfortunately I could not get DHCPv6 working with Proxmox SDN and dnsmasq, hence the OPNsense; otherwise you would not need it at all.</p><h4 id="schritt-3-netzwerkkonfiguration-im-routing-modus">Step 3: Network configuration in routed mode</h4><p>Now we get to the meat of it: the network configuration.</p><ul><li>Open the file <code>/etc/network/interfaces</code> on your Proxmox server and add the entries below.</li><li>Make sure the IPv6 address and the gateway are configured according to the Hetzner guide.</li></ul><p>Hetzner always uses fe80::1 as its gateway; it is announced via Router Advertisements and is the usual convention for IPv6 gateways.</p><p><strong>Enable routing</strong>: To run IPv6 in routed mode, edit <code>/etc/sysctl.conf</code> and add the following line:</p><pre><code class="language-plaintext">net.ipv6.conf.all.forwarding=1</code></pre><p>Activate it with <code>sysctl -p</code>.</p><p><strong>Host configuration</strong>:</p><pre><code class="language-plaintext">auto vmbr0
iface vmbr0 inet6 static
        address 2a01:4f8:XXXX:XXXX::2/128
        gateway fe80::1
        bridge-ports enp0s31f6
        bridge-stp off
        bridge-fd 0
#Public Internet
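# vmbr0 only carries the single /128 management address of the Proxmox host itself (SSH, updates)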

auto vmbr1
iface vmbr1 inet6 static
        address 2a01:4f8:XXXX:XXXX:a::1/80
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up ip route add 2a01:4f8:XXXX:XXXX:a:a::/96 via 2a01:4f8:XXXX:XXXX:a::2
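# the /96 guest prefix is handed to OPNsense: traffic for it is routed to the OPNsense WAN address 2a01:4f8:XXXX:XXXX:a::2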
#Routed Subnet for OPNSense</code></pre><p>To explain:</p><p>A <code>/128</code> specifies exactly one IP address. I therefore reserve this IP on the default bridge <code>vmbr0</code> for the Proxmox host itself, for SSH, updates, etc., and use it as the management interface. The second bridge is the routing interface. Here I put an <code>a</code> into the fifth block and fill the rest with zeros. As the prefix length I use <code>/80</code>, so everything in blocks 6-8 below <code>a</code> is delegated to this bridge.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/02/image.png" class="kg-image" alt="Einrichten eines Dedicated Servers mit Proxmox und IPv6 auf Hetzner" loading="lazy" width="561" height="514"></figure><p>The <code>/64</code> prefix is the full prefix from Hetzner, which always reaches up to the fourth block. The prefix works like the netmask in IPv4: it tells the system which part is fixed for the connection and which IPs the network can use.</p><p>So I move 16 bits further, because one block corresponds to 16 bits, just written in hexadecimal. Each character represents a group of 4 bits with a value from 0 to f, i.e. 16 possible characters. An <code>f</code> simply means that all 4 bits of that group are set to 1.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/02/image-1.png" class="kg-image" alt="Einrichten eines Dedicated Servers mit Proxmox und IPv6 auf Hetzner" loading="lazy" width="564" height="502"></figure><p>After this digression, I add a route for a /96 prefix (64+16+16) pointing to the gateway IP of the OPNsense network.</p><h3 id="schritt-3-opnsense">Step 4: OPNsense</h3><p>Now install an OPNsense VM on the Proxmox host and give it vmbr1 as WAN; for LAN I used an SDN vnet.</p><p>In OPNsense you now configure the routed /96 prefix, but with the bridge IP as the gateway instead of fe80::1.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/02/image-2.png" class="kg-image" alt="Einrichten eines Dedicated Servers mit Proxmox und IPv6 auf Hetzner" loading="lazy" width="874" height="173" srcset="https://pierewoehl.de/content/images/size/w600/2025/02/image-2.png 600w, https://pierewoehl.de/content/images/2025/02/image-2.png 874w" sizes="(min-width: 720px) 720px"></figure><p>We enable Router Advertisements!</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/02/image-3.png" class="kg-image" alt="Einrichten eines Dedicated Servers mit Proxmox und IPv6 auf Hetzner" loading="lazy" width="1532" height="893" srcset="https://pierewoehl.de/content/images/size/w600/2025/02/image-3.png 600w, https://pierewoehl.de/content/images/size/w1000/2025/02/image-3.png 1000w, https://pierewoehl.de/content/images/2025/02/image-3.png 1532w" sizes="(min-width: 720px) 720px"></figure><p>Then we configure DHCPv6 and Unbound.</p><p>Since I will never have more than 1,000 VMs on this box, it is enough to simply pick a range of 4,096 IPs.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/02/image-10.png" class="kg-image" alt="Einrichten eines Dedicated Servers mit Proxmox und IPv6 auf Hetzner" 
loading="lazy" width="2000" height="1145" srcset="https://pierewoehl.de/content/images/size/w600/2025/02/image-10.png 600w, https://pierewoehl.de/content/images/size/w1000/2025/02/image-10.png 1000w, https://pierewoehl.de/content/images/size/w1600/2025/02/image-10.png 1600w, https://pierewoehl.de/content/images/2025/02/image-10.png 2162w" sizes="(min-width: 720px) 720px"></figure><p>Unbound DNS hat auch eine kleine konfiguration</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/02/image-5.png" class="kg-image" alt="Einrichten eines Dedicated Servers mit Proxmox und IPv6 auf Hetzner" loading="lazy" width="2000" height="753" srcset="https://pierewoehl.de/content/images/size/w600/2025/02/image-5.png 600w, https://pierewoehl.de/content/images/size/w1000/2025/02/image-5.png 1000w, https://pierewoehl.de/content/images/size/w1600/2025/02/image-5.png 1600w, https://pierewoehl.de/content/images/size/w2400/2025/02/image-5.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Ich habe neben den hetzner DNS servern von System fuer github einen sog. NAT64 DNS Server hinterlegt, weil github kann kein IPv6.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/02/image-9.png" class="kg-image" alt="Einrichten eines Dedicated Servers mit Proxmox und IPv6 auf Hetzner" loading="lazy" width="1074" height="674" srcset="https://pierewoehl.de/content/images/size/w600/2025/02/image-9.png 600w, https://pierewoehl.de/content/images/size/w1000/2025/02/image-9.png 1000w, https://pierewoehl.de/content/images/2025/02/image-9.png 1074w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/02/image-8.png" class="kg-image" alt="Einrichten eines Dedicated Servers mit Proxmox und IPv6 auf Hetzner" loading="lazy" width="1062" height="673" srcset="https://pierewoehl.de/content/images/size/w600/2025/02/image-8.png 600w, https://pierewoehl.de/content/images/size/w1000/2025/02/image-8.png 1000w, https://pierewoehl.de/content/images/2025/02/image-8.png 1062w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[Using Ingest actions in Splunk to export to non AWS S3 and how to import the data back from non AWS…]]></title><description><![CDATA[In this article you will see how you can use Ingest Actions in splunk to redirect data into S3-compatible storage. 
And how to retrieve it…]]></description><link>https://pierewoehl.de/using-ingest-actions-in-splunk-to-export-to-non-aws-s3-and-how-to-import-the-data-back-from-non-aws/</link><guid isPermaLink="false">6676d858f880cf131584ea19</guid><category><![CDATA[splunk]]></category><category><![CDATA[s3]]></category><category><![CDATA[data]]></category><category><![CDATA[hec]]></category><category><![CDATA[python]]></category><category><![CDATA[conda]]></category><category><![CDATA[pip]]></category><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Wed, 26 Feb 2025 19:42:23 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fGRhdGElMjBmbG93fGVufDB8fHx8MTc0MDU5ODkyNHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h3 id="using-ingest-actions-in-splunk-to-export-to-non-aws-s3-and-how-to-import-the-data-back-from-non-aws-s3">Using Ingest actions in Splunk to export to non AWS S3 and how to import the data back from non AWS S3</h3><img src="https://images.unsplash.com/photo-1488229297570-58520851e868?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fGRhdGElMjBmbG93fGVufDB8fHx8MTc0MDU5ODkyNHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Using Ingest actions in Splunk to export to non AWS S3 and how to import the data back from non AWS&#x2026;"><p>In this article you will see how you can use Ingest Actions in Splunk to redirect data into S3-compatible storage, and how to retrieve it back from S3 into the Splunk index.</p><p>Wait, you might say, isn&apos;t there an AWS Add-On which is capable of ingesting from S3? Yes and no&#x2026;.<br>It is only compatible with AWS&apos;s own S3 implementation, so the UI elements are restricted to AWS specifics.</p><p>So I had to develop my own. How? HEC!</p><p>HEC? &#x2026; the HTTP Event Collector.</p><p>If you just want the script, go ahead and skip my walkthrough and learning path; you can find the complete script further down.</p><p>Now for the people who are willing to read through my thought process.</p><h3 id="back-story">Back story</h3><p>A bit of back story: if you work with an API that is practical to implement in the REST API Modular Input from BaboonBones, this is easy. But if you are not able to do so, you have to write your own script to get the data.</p><p>But you also want it in Splunk, so how? 
You create an HTTP Event Collector, which is basically an API token that lets you post data right into the index queue, and hope for the best.</p><p>Well, no: you have to format it in a specific way. OK, you can just send the event field, but that is no fun.</p><p>What you really want is to send an event with all the original information you would expect it to have if you had collected it through the universal forwarder, so:</p><ul><li>time</li><li>source</li><li>sourcetype</li><li>host</li><li>event</li></ul><p>Now you have context.</p><p>So I knew that I can ingest data through an HEC; I needed to figure out how to connect to S3 and retrieve the data, and first check whether what Splunk exports onto the S3 bucket is usable data at all.</p><p>But first let&apos;s get some data to go from the index queue out to the S3 bucket.</p><h3 id="configure-ingest-actions-to-route-data-to-s3-destination">Configure Ingest actions to route data to S3 destination</h3><p>Starting with the destination: you can configure it on the indexer manager or on a connected search head. I assume you have the correct permissions going forward.</p><ol><li>Select Settings &gt; Ingest Actions under DATA</li><li>Select Destinations</li><li>Add a Destination</li><li>Enter a unique name for the destination; I name mine like s3-$sourcetype-$usecase</li><li>Enter your S3 Endpoint URL; just type it in, the dropdown is not a restriction, and it states there that you can enter anything</li><li>Enter your S3 Bucket and Folder (I set it to the name of the destination)</li><li>Enter your Access ID and Secret Key</li><li>Check with Test connection.</li></ol><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-hftremtyvjsagcyz4gem2q.png" class="kg-image" alt="Using Ingest actions in Splunk to export to non AWS S3 and how to import the data back from non AWS&#x2026;" loading="lazy" width="586" height="1196"></figure><p>Please make sure to disable Compression and enable the JSON format under Advanced Options, so you can read the output properly.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-6rr2e9cd0bzrenm84jofpw.png" class="kg-image" alt="Using Ingest actions in Splunk to export to non AWS S3 and how to import the data back from non AWS&#x2026;" loading="lazy" width="554" height="450"></figure><p>So you now have a Destination.</p><p>Now go into Rulesets and click Add Rule:</p><ul><li>Give it a name; I again used the pattern s3-$sourcetype-$usecase</li><li>Then select the sourcetype you would like to intercept</li><li>Select a sample size to design the filter rule</li><li>Click on Apply</li><li>Then add a rule &#x201C;Route to Destination&#x201D;</li><li>Select EVAL and create a rule like</li></ul><pre><code>source=&quot;WinEventLog:Security&quot; AND match(_raw,&quot;&lt;EventID&gt;4624&lt;/EventID&gt;&quot;)</code></pre><ul><li>Now remove &#x201C;Default Destination&#x201D;</li><li>Select your new S3 Destination</li><li>You are done; all events matching that rule are now set up to be redirected to S3 and not ingested</li></ul><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-g41igwzsi0hickyfiyahyw.png" class="kg-image" alt="Using Ingest actions in Splunk to export to non AWS S3 and how to import the data back from non AWS&#x2026;" loading="lazy" width="497" height="602"></figure><figure class="kg-card kg-image-card"><img 
src="https://pierewoehl.de/content/images/2024/06/1-5jhmh9fahyipanyenzm7aw.png" class="kg-image" alt="Using Ingest actions in Splunk to export to non AWS S3 and how to import the data back from non AWS&#x2026;" loading="lazy" width="468" height="48"></figure><h3 id="ingesting-back-into-splunk">Ingesting back into splunk</h3><p>So now have a look onto S3 and view the files created.</p><p>As you see that out of luck your json files are just lists of event data formatted in a specific format.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-u3jah7qb-egipd23cz2mtg.png" class="kg-image" alt="Using Ingest actions in Splunk to export to non AWS S3 and how to import the data back from non AWS&#x2026;" loading="lazy" width="800" height="152" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/1-u3jah7qb-egipd23cz2mtg.png 600w, https://pierewoehl.de/content/images/2024/06/1-u3jah7qb-egipd23cz2mtg.png 800w" sizes="(min-width: 720px) 720px"></figure><p>Which format? when you read carefully you see that the data is right formatted like the object a HEC expects when posted onto it.</p><p>Sooooo lets begin and have fun!</p><p>// Btw. this looks easy now but it took a whole day to figure out</p><p>First I used python, because splunk is python and I may implement into a custom add-on.</p><p>So for working with with S3 on python AWS provides the boto3 module through pip, In my case I used conda but pip is fine.</p><p>I give you the complete list here because the rest are just modules you need anyway, no magic to unpack here</p><pre><code class="language-bash">conda install boto3, requests</code></pre><h4 id="connect-to-s3-with-boto3">Connect to S3 with boto3</h4><p>First you need to import boto3, duh. Then you connect an s3 resource and get your bucket selected</p><pre><code class="language-python">import boto3 
s3 = boto3.resource(&apos;s3&apos;, endpoint_url=&apos;https://s3.example.com&apos;, aws_access_key_id=&apos;ADBCDEF&apos;, aws_secret_access_key=&apos;GHIJKLMNOP&apos;) 
bucket = s3.Bucket(&apos;YOURBUCKET&apos;)</code></pre><p>Remember, I saved my data into a folder, but S3 does not understand folders; it understands prefixes&#x2026; path prefixes. So now on to getting your files from S3 loaded into Python.</p><p>The Bucket object has an attribute called objects, which supports the .all() method and the .filter() method. To filter for the data we need, we use the filter method and our prefix:</p><pre><code class="language-python">bucket.objects.filter(Prefix=&apos;s3-$sourcetype-$usecase&apos;)</code></pre><p>This will give you every file under that prefix. We simply need to loop over them, retrieve each JSON file and send it on to Splunk. Easy!</p><p>You might want to stop here: unfortunately boto3 (or the S3 specification) does not just stream the data into a variable; you need to explicitly download it, but not save it to disk. Here is the trick: SpooledTemporaryFile, a function inside tempfile. To use it you need to import tempfile, and while you are at it you can also import json and requests.</p><p>You need to specify how many bytes it is allowed to keep in memory before falling back to disk. I chose 1 MB, since I saw that the files do not really get bigger than that.</p><pre><code class="language-python">import requests 
import tempfile 
import json 
#[...] 
for object in bucket.objects.filter(Prefix=&apos;s3-$sourcetype-$usecase&apos;): 
  data = tempfile.SpooledTemporaryFile(max_size=1000000)</code></pre><p>Now you have a place to keep the data in memory. To retrieve it you need one of two functions: download_file or download_fileobj.</p><p>download_file expects you to save it to a path; we do not want that.<br>download_fileobj does exactly what we want: it downloads the file as an object into a file-like object.</p><pre><code class="language-python">bucket.download_fileobj(object.key, data)</code></pre><p>Hooray, we have the data. We can see that it is JSON, so we load it and loop over the items in the JSON list.</p><p>We also know that we can just post it to the HEC, so let&apos;s do that. Keep in mind that we need to set the authorization header for Splunk:</p><pre><code class="language-python">headers = {&apos;Authorization&apos;: &apos;Splunk dd4680f4-bd33-426e-9ba6-f7ceff5761cd&apos;} 
events = json.loads(data.read()) 
for event in events:  
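  # each list item is already shaped like a HEC payload (time, host, source, sourcetype, event)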
  requests.post(headers=headers, url=&apos;https://splunk.example.com&apos;, json=event)</code></pre><p>If we now execute that we see&#x2026;.<br>Nothing&#x2026;</p><p>As I found out after working through the SpooledTemporaryFile docs I found that since it is a datastream I need to set the handler back to the front, because we wrote into it it was at the back with no data behind the last byte.</p><p>We do that with:</p><pre><code class="language-python">data.seek(0)</code></pre><p>Thats it, now it works, let us check</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-i2fjihtffdr8ebyrhp8sza.png" class="kg-image" alt="Using Ingest actions in Splunk to export to non AWS S3 and how to import the data back from non AWS&#x2026;" loading="lazy" width="800" height="92" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/1-i2fjihtffdr8ebyrhp8sza.png 600w, https://pierewoehl.de/content/images/2024/06/1-i2fjihtffdr8ebyrhp8sza.png 800w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-oizmyzp_crvgp8yvbdi3hg.png" class="kg-image" alt="Using Ingest actions in Splunk to export to non AWS S3 and how to import the data back from non AWS&#x2026;" loading="lazy" width="699" height="97" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/1-oizmyzp_crvgp8yvbdi3hg.png 600w, https://pierewoehl.de/content/images/2024/06/1-oizmyzp_crvgp8yvbdi3hg.png 699w"></figure><p>Eureka, it works, now we can import that data into splunk as we need it, so we can redirect data not needed away into s3 and retrieve it when we need it.</p><p>Soo there is still something more to keep in mind. When now ingesting through the HEC, the event just looks like it comes from the original source, because the source is in the event data and we have not touched it.</p><p>moving forward you will see that it lands directly back again in s3, so we need to differentiate between data coming from the original source and the HEC.</p><p>So I do this, do identify the data and the file and bucket and so on it came from:</p><pre><code class="language-python">event[&apos;source&apos;] = object.key</code></pre><p>The key by the way is the filename of the object you currently have retrieved.</p><p>To now finally have the complete script you can go ahead</p><pre><code class="language-python">import boto3 
import requests 
import tempfile 
import json 
 
s3_endpoint = &apos;s3.example.com&apos; 
s3_bucket = &apos;bucket&apos; 
s3_prefix = &apos;s3-xmlwineventlog-event4568&apos; 
s3_access_key = &apos;ABDEFG&apos; 
s3_secret_key = &apos;HIJKLMNOP&apos; 
 
splunk_server = &apos;splunk.example.com&apos; 
splunk_token = &apos;dd4680f4-bd33-426e-9ba6-f7ceff5761cd&apos; 
 
headers = {&apos;Authorization&apos;: &apos;Splunk &apos; + splunk_token} 
 
s3 = boto3.resource(&apos;s3&apos;, endpoint_url=s3_endpoint, aws_access_key_id=s3_access_key, aws_secret_access_key=s3_secret_key) 
bucket = s3.Bucket(s3_bucket) 
 
for object in bucket.objects.filter(Prefix=s3_prefix): 
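  # walk every exported file under the Ingest Actions prefix; each one is a JSON list of HEC-ready events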
  data = tempfile.SpooledTemporaryFile(max_size=1000000) 
  bucket.download_fileobj(object.key, data) 
  data.seek(0) 
  events = json.loads(data.read()) 
 
  data.close() 
 
  for event in events:  
    event[&apos;source&apos;] = object.key 
    requests.post(headers=headers, url=splunk_server, json=event)</code></pre><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-f_sr3b5iuzajtuvmrvwzpq.png" class="kg-image" alt="Using Ingest actions in Splunk to export to non AWS S3 and how to import the data back from non AWS&#x2026;" loading="lazy" width="800" height="730" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/1-f_sr3b5iuzajtuvmrvwzpq.png 600w, https://pierewoehl.de/content/images/2024/06/1-f_sr3b5iuzajtuvmrvwzpq.png 800w" sizes="(min-width: 720px) 720px"></figure><p>In the next post on this topic I will show an optimized version as a Splunk modular input.</p>]]></content:encoded></item><item><title><![CDATA[Something about AppLocker]]></title><description><![CDATA[<p>When configuring AppLocker there are several things to think about. AppLocker itself is a tool from Microsoft integrated into Windows that allows administrators, and also Microsoft itself, to block applications based on three different identifiers.</p><p>The identifiers are:</p><ul><li>Path</li><li>Hash</li><li>Publisher</li></ul><p>Microsoft does configure AppLocker with some default rules if you</p>]]></description><link>https://pierewoehl.de/something-about-applocker/</link><guid isPermaLink="false">67521440384fcd1912b9c043</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Sat, 04 Jan 2025 19:57:51 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1614064641938-3bbee52942c7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDk4fHxzZWN1cml0eXxlbnwwfHx8fDE3MzM0MzQ3MjZ8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1614064641938-3bbee52942c7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDk4fHxzZWN1cml0eXxlbnwwfHx8fDE3MzM0MzQ3MjZ8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Something about AppLocker"><p>When configuring AppLocker there are several things to think about. AppLocker itself is a tool from Microsoft integrated into Windows that allows administrators, and also Microsoft itself, to block applications based on three different identifiers.</p><p>The identifiers are:</p><ul><li>Path</li><li>Hash</li><li>Publisher</li></ul><p>Microsoft does configure AppLocker with some default rules if you wish to enable them.</p><p>The requirement to run AppLocker is at least the Windows Pro edition.</p><p>Rules are enforced by default: if you apply rules, they are enforced; no rules means there is nothing to enforce. So to verify your rules first, please set AppLocker to Audit mode; this is done in the same place where you configure the rules themselves.</p><h3 id="path">Path</h3><p>The Path rule checks the path of an application. It can easily be circumvented if you can move or install an application outside of the specified path or filename.</p><h3 id="hash">Hash</h3><p>A Hash rule is meant to verify only the hash of an application to be blocked and nothing else; this is the most direct and specific rule option. But be aware that it is somewhat work intensive, because you need to keep the hashes up to date if they ever change.</p><h3 id="publisher">Publisher</h3><p>The Publisher rule is the most powerful in my opinion.</p><p>You can filter for a specific publisher based on the vendor&apos;s code signing certificate for an application. 
But you can also make use of wildcards, so you can allow or deny specific application vendors, products, and executables, down to a specific version or range of versions.</p><h2 id="the-applocker-%E2%80%9Econtrol-panel%E2%80%9C">The AppLocker &#x201E;Control Panel&#x201C;</h2><p>AppLocker policies are configured in three locations, although these are basically just two:</p><ul><li>Domain Group Policy Management</li><li>Local Group Policy</li><li>Local Security Policies</li></ul><p>The Local Group Policy and the Local Security Policy stay in sync with each other. These screenshots give you a brief overview of the options in the panel.</p><h3 id="hidden-options">Hidden Options</h3><p>A hidden option, which is hidden for a reason, is the DLL Rule Collection.<br>It will degrade your system&apos;s performance, because every DLL to be loaded is checked against the AppLocker allow/deny rules.</p><h2 id="applocker-processing-rules">AppLocker Processing Rules</h2><ul><li>A deny rule always overrides an allow rule</li><li>If any rule exists, even an allow rule, everything else is implicitly denied</li><li>If an Exe rule exists, then all packaged apps are blocked as well, until you allow the specific packaged app or allow every packaged app</li></ul><h2 id="applocker-event-log">AppLocker Event Log</h2><p>You can find the Event Log for AppLocker in the Event Viewer under: Applications and Services Logs - Microsoft - Windows - AppLocker</p><p>There you have one log per AppLocker category.</p><h2 id="use-case-examples">Use Case Examples</h2><h3 id="blocking-a-specific-group-to-execute-an-application">Blocking a specific group from executing an application</h3><p>If you want to block a specific group from executing an application, you create a deny rule for the application targeted at the group; this is one of the easier things to do in AppLocker.</p><h3 id="allowing-a-specific-group-to-execute-an-application">Allowing only a specific group to execute an application</h3><p>For example, you want to deny users the ability to run well-known browsers because you have a managed browser for your environment. 
Then you need to think outside the box, because you can&#x2019;t just deny the application and then allow it for a group; a deny wins over an allow.</p><p>So how do you do it?</p><ol><li>You create a rule for Everyone to allow anything; this is done with a Path rule whose path is the wildcard symbol &#x201E;*&#x201C;.</li><li>You then go into the rule and under Exceptions add every application you want to block. This works because you make an exception to an allow, which is then implicitly denied.</li><li>Now you create an Allow rule for exactly the applications you just made an exception for in the &#x201E;Allow Everyone Anything&#x201C; rule and assign it to the group you wish to attach it to.</li></ol><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/01/image.png" class="kg-image" alt="Something about AppLocker" loading="lazy" width="1362" height="876" srcset="https://pierewoehl.de/content/images/size/w600/2025/01/image.png 600w, https://pierewoehl.de/content/images/size/w1000/2025/01/image.png 1000w, https://pierewoehl.de/content/images/2025/01/image.png 1362w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/01/image-1.png" class="kg-image" alt="Something about AppLocker" loading="lazy" width="440" height="442"></figure><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/01/image-2.png" class="kg-image" alt="Something about AppLocker" loading="lazy" width="437" height="446"></figure><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/01/Screenshot-2025-01-04-205401.png" class="kg-image" alt="Something about AppLocker" loading="lazy" width="707" height="644" srcset="https://pierewoehl.de/content/images/size/w600/2025/01/Screenshot-2025-01-04-205401.png 600w, https://pierewoehl.de/content/images/2025/01/Screenshot-2025-01-04-205401.png 707w"></figure><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/01/Screenshot-2025-01-04-205413.png" class="kg-image" alt="Something about AppLocker" loading="lazy" width="441" height="435"></figure><h3 id="blocking-vulnerable-applications">Blocking vulnerable Applications</h3><p>Here we make use of the Publisher rule. You can set the rule to allow anything from a given publisher and application but deny applications with a version number below a specific point. Combined with hash rules, this gives you a pretty powerful method of blocking vulnerable applications for your users.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2025/01/image-3.png" class="kg-image" alt="Something about AppLocker" loading="lazy" width="444" height="450"></figure>]]></content:encoded></item><item><title><![CDATA[Microsoft Intune - Scoping Tags]]></title><description><![CDATA[<figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/O56ys_Jg2BQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Intune Scoping"></iframe></figure><p>Please note: This Video is in German</p><p>This blog is about how to use the scoping feature in Microsoft Intune. It lets you separate administrative tasks for specific roles, groups, or teams. For example, imagine a group of people managing Microsoft Teams Rooms devices. 
You might want to give them</p>]]></description><link>https://pierewoehl.de/microsoft-intune-scoping-tags/</link><guid isPermaLink="false">66786af3f335f046681733ed</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Sun, 23 Jun 2024 18:56:02 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/O56ys_Jg2BQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Intune Scoping"></iframe></figure><p>Please note: This Video is in German</p><p>This blog is about how to use the scoping feature in Microsoft Intune. It lets you separate administrative tasks for specific roles, groups, or teams. For example, imagine a group of people managing Microsoft Teams Rooms devices. You might want to give them some capabilities to manage those devices without giving them access to all devices. With scoping, you can divide your Intune environment into different scopes, which is where the &quot;scope tag&quot; name comes from.</p><h2 id="so-here-is-how-you-do-it">So here is how you do it.</h2><p>First you go into Roles in the Tenant Administration of Intune, where you will find the Scope Tags configuration on the left side of the menu.</p><p>Before creating a scoping tag you need to do two things!</p><ol><li>Create a group that contains the administrators or people who will perform the actions</li><li>Create a dynamically assigned group for devices following a specific naming convention or category.</li></ol><p>This way you are set up for the scoping tags.</p><p>So, moving forward, you create the scoping tag and name it, then select the device group where you want the tag to be applied.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/image.png" class="kg-image" alt loading="lazy" width="468" height="329"></figure><p>Next, you&apos;ll want to choose the role you want to authorize the Admin group to. Then, under Assignments, you&apos;ll assign the Scoping Tag, the User Group of the admins, and the Devices Group.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/image-1.png" class="kg-image" alt loading="lazy" width="751" height="511" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/image-1.png 600w, https://pierewoehl.de/content/images/2024/06/image-1.png 751w" sizes="(min-width: 720px) 720px"></figure><p>Now you are set.</p><p>Don&apos;t be confused by the messages seen in the Members and Scope Groups selection screen; they need to be read like this:</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/image-3.png" class="kg-image" alt loading="lazy" width="421" height="89"></figure><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/image-2.png" class="kg-image" alt loading="lazy" width="551" height="85"></figure><p>Admin group users are the administrators assigned to this role (this is the mentioned role assignment); administrators in this role assignment can target policies, applications and remote tasks <strong>to devices that are members of the groups listed below</strong></p><p>That&apos;s all there is to it. 
You can assign multiple scope tags to devices if you have people managing different aspects but with overlapping device areas.</p>]]></content:encoded></item><item><title><![CDATA[2FA for Ghost Admin Panel]]></title><description><![CDATA[<p>Hi,</p><p>I set up this blog yesterday with Ghost; you are using it right now.<br>I wanted to protect the Admin Panel with 2FA, but Ghost does not support that.</p><p>So I thought: how else could I do that?</p><p>I use authentik to authenticate to almost any application I run in</p>]]></description><link>https://pierewoehl.de/2fa-for-ghost-admin-panel/</link><guid isPermaLink="false">6677c5b7f335f04668173388</guid><category><![CDATA[authentik]]></category><category><![CDATA[nginxproxymanager]]></category><category><![CDATA[npm]]></category><category><![CDATA[ghost]]></category><category><![CDATA[blog]]></category><category><![CDATA[admin]]></category><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Sun, 23 Jun 2024 07:18:43 GMT</pubDate><content:encoded><![CDATA[<p>Hi,</p><p>I set up this blog yesterday with Ghost; you are using it right now.<br>I wanted to protect the Admin Panel with 2FA, but Ghost does not support that.</p><p>So I thought: how else could I do that?</p><p>I use authentik to authenticate to almost any application I run in my network / home lab and so on. authentik has a proxy mode I already use with other applications. Now I tested it; for authentik you need to specify which paths should stay unauthenticated, so I tried it with the following regex: </p><pre><code>^((?!ghost).)*$
</code></pre>
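<p>To see what this pattern is supposed to do, here is a quick illustration in Python, where negative lookaheads are supported; the sample paths are only hypothetical examples:</p><pre><code class="language-python">import re

# intent: match (i.e. leave unauthenticated) every path that does NOT contain "ghost"
pattern = re.compile(r&apos;^((?!ghost).)*$&apos;)

print(bool(pattern.match(&apos;/some/public/post/&apos;)))  # True  -&gt; would stay unauthenticated
print(bool(pattern.match(&apos;/ghost/&apos;)))             # False -&gt; would require the authentik login</code></pre>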
<p>But I was bummed to find out that authentik uses Go, whose regular expressions have no native negative lookahead support. So how else could I accomplish this?<br>I thought: to get to the authentik container I already use Nginx Proxy Manager, so is there a way?<br>Yes there is: Nginx Proxy Manager allows you to configure custom locations in the proxy host config.<br>So how do you go forward?</p><p>Step 1: Create your authentik Proxy provider</p><ol><li>Under Applications &gt; Providers click Create</li><li>Name it &apos;Ghost&apos;</li><li>Select Proxy provider</li><li>As the Authorization Flow use the implicit-consent flow</li><li>Use the Proxy mode</li><li>As the External Host use your domain without any path, and https, as you always should on public sites.</li><li>For Internal use the IP or hostname of the provider or container where you host Ghost</li></ol><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/Screenshot-2024-06-23-005838.png" class="kg-image" alt loading="lazy" width="1106" height="706" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/Screenshot-2024-06-23-005838.png 600w, https://pierewoehl.de/content/images/size/w1000/2024/06/Screenshot-2024-06-23-005838.png 1000w, https://pierewoehl.de/content/images/2024/06/Screenshot-2024-06-23-005838.png 1106w" sizes="(min-width: 720px) 720px"></figure><p>Then leave the Unauthenticated Paths empty.</p><p>Create your Application as you did with any other application, select the Ghost provider, then update your authentik embedded outpost.</p><p>Now for Nginx Proxy Manager.<br>For this to work you need to know that authentik expects an embedded redirection for the outpost; you can see the URIs under Protocol Settings in the authentik Application, named &apos;Allowed Redirect URIs&apos;:</p><pre><code>https://pierewoehl.de/outpost.goauthentik.io/callback\?X-authentik-auth-callback=true  
https://pierewoehl.de\?X-authentik-auth-callback=true
</code></pre>
<p>For me, only the outpost redirect was needed; the second entry may not be necessary.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://pierewoehl.de/content/images/2024/06/Screenshot-2024-06-23-005628.png" class="kg-image" alt loading="lazy" width="500" height="546"><figcaption><span style="white-space: pre-wrap;">Proxy Host - Main Page</span></figcaption></figure><p>As you can see in the screenshot, you simply configure the base domain with the target host you run your Ghost on.</p><p>Then, under custom locations, you add two locations.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://pierewoehl.de/content/images/2024/06/Screenshot-2024-06-23-005636.png" class="kg-image" alt loading="lazy" width="498" height="890"><figcaption><span style="white-space: pre-wrap;">Custom Locations</span></figcaption></figure><p>You add one for /ghost, which is the URL protected by authentik, and you also configure the outpost path so that it is resolved to the authentik server in the backend.</p><p>This is all on the same Docker host, therefore authentik does not use https internally.</p>]]></content:encoded></item><item><title><![CDATA[Restoring your contacts from a broken Nextcloud instance]]></title><description><![CDATA[Hi,]]></description><link>https://pierewoehl.de/restoring-you-contacts-from-a-broken-nextcloud-instance/</link><guid isPermaLink="false">6676d858f880cf131584ea1b</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Wed, 17 Aug 2022 21:29:13 GMT</pubDate><media:content url="https://pierewoehl.de/content/images/2024/06/0-6d71xt3fpxvpioyr.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://pierewoehl.de/content/images/2024/06/0-6d71xt3fpxvpioyr.jpg" alt="Restoring your contacts from a broken Nextcloud instance"><p>Hi,</p><p>My Nextcloud instance broke a couple of days ago and I couldn&#x2019;t get it running again with the original config. To recover my contacts I looked in the database and found the raw vCard data (which is the common sharing format for contacts) there.</p><p>With the Python code below you can restore the files and import them into any address book like Google, Outlook or iCloud, or back into a Nextcloud.</p>
<!--kg-card-begin: html-->
<script src="https://gist.github.com/pierew/8bdb0ae2cf2fc7e4de00d39a88c7dc0f.js"></script>
<!--kg-card-end: html-->
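<p>The gist above contains the actual script. As a rough idea of how such a recovery can look, here is a minimal sketch; it assumes a MySQL/MariaDB backend, the default <code>oc_</code> table prefix and the <code>pymysql</code> driver, so table names, credentials and the addressbook id are placeholders you will have to adjust:</p><pre><code class="language-python">import pymysql

# connect to the (possibly broken) Nextcloud database -- credentials are placeholders
conn = pymysql.connect(host=&apos;localhost&apos;, user=&apos;nextcloud&apos;, password=&apos;secret&apos;, database=&apos;nextcloud&apos;)

with conn.cursor() as cur:
    # list the addressbooks first to find the id you want to export
    cur.execute(&apos;SELECT id, displayname FROM oc_addressbooks&apos;)
    for row in cur.fetchall():
        print(row)

    # dump every vCard of one addressbook into a single .vcf file
    cur.execute(&apos;SELECT carddata FROM oc_cards WHERE addressbookid = %s&apos;, (4,))
    with open(&apos;contacts.vcf&apos;, &apos;wb&apos;) as out:
        for (carddata,) in cur.fetchall():
            out.write(carddata)  # carddata holds the raw vCard data
            out.write(b&apos;\n&apos;)</code></pre>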
<p>Please note that you need to look up your addressbook id via SQL first. You could use Adminer or a graphical SQL client.</p>]]></content:encoded></item><item><title><![CDATA[S3 mit Hetzner StorageBox]]></title><description><![CDATA[Hi, I am currently moving my offsite backup from the S3 storage hoster Wasabi to a Hetzner StorageBox.]]></description><link>https://pierewoehl.de/s3-mit-hetzner-storagebox/</link><guid isPermaLink="false">6676d858f880cf131584ea1c</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Thu, 10 Mar 2022 16:47:06 GMT</pubDate><media:content url="https://pierewoehl.de/content/images/2024/06/Hetzner-Logo-slogan_space-trans.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-embed-card kg-card-hascaption"><iframe src="https://www.youtube.com/embed/CUA5Gj0y0r4?feature=oembed" width="700" height="393" frameborder="0" scrolling="no"></iframe><figcaption><img src="https://pierewoehl.de/content/images/2024/06/Hetzner-Logo-slogan_space-trans.png" alt="S3 mit Hetzner StorageBox"><p><span style="white-space: pre-wrap;">Exposing a Hetzner StorageBox via S3</span></p></figcaption></figure><p>Hi, I am currently moving my offsite backup from the S3 storage hoster Wasabi to a Hetzner StorageBox.</p><p>Unfortunately Hetzner does not offer an S3 protocol to access the StorageBox. I could also back up to the StorageBox via rsync, but I wanted to try whether it would work with S3 as well.</p><p>Well, I came across MinIO, an open-source implementation of the S3 storage protocol.</p><p>Since I already have a Hetzner vServer as the ingress point for my services, I thought I could use such a machine to make my StorageBox accessible via S3.</p><h3 id="storagebox-einbinden">Mounting the StorageBox</h3><p>First of all I enabled SAMBA on the StorageBox and disabled external access, so that I do not have to worry too much about the SMB credentials going over the internet.</p><p>On the vServer I then installed the cifs-utils, added the CIFS mount to the fstab and mounted the StorageBox:</p><pre><code>//u294301.your-storagebox.de/backup /mnt/u294301 cifs defaults,auto,uid=1000,gid=1000,user=u294301,password=***********</code></pre><h3 id="minio-konfigurieren">Configuring MinIO</h3><p>MinIO uses the user minio-user for its default service, which still has to be created. Since this is a bare Rocky Linux server that I manage directly as root when needed, there was no other user yet, so this user got the id 1000, which is also what I specified for the mount.</p><p>To install MinIO I first create the user with:</p><pre><code>$ adduser minio-user</code></pre><p>Then I grab the current RPM install command from the MinIO download page: <a href="https://min.io/download?ref=pierewoehl.de#/linux" rel="nofollow noopener">https://min.io/download#/linux</a> and select RPM:</p><pre><code>$ dnf install https://dl.min.io/server/minio/release/linux-amd64/minio-20220308222851.0.0.x86_64.rpm</code></pre><p>The above is the current version as of today. 
After the installation is done you still have to create the MinIO config under /etc/default/minio:</p><pre><code>MINIO_ROOT_USER=admin
MINIO_ROOT_PASSWORD=**************
MINIO_VOLUMES=&quot;/mnt/u294301&quot;
MINIO_OPTS=&quot;--console-address :9001&quot;</code></pre><p>This creates the admin account, defines the volume and enables the console used to manage the S3 service.</p><p>However, no SSL is configured yet if you were to start the service now with:</p><pre><code>$ systemctl enable --now minio</code></pre><h3 id="ssl-einrichten"><strong>Setting up SSL</strong></h3><p>The following setting is also required in the config file:</p><pre><code>MINIO_SERVER_URL=&quot;https://s3.storagebox.mydomain.de&quot;</code></pre><p>Now we also have to create this SSL certificate; there is a guide for LetsEncrypt as well: <a href="https://docs.min.io/docs/generate-let-s-encypt-certificate-using-concert-for-minio.html?ref=pierewoehl.de" rel="nofollow noopener">https://docs.min.io/docs/generate-let-s-encypt-certificate-using-concert-for-minio.html</a></p><p>In my case I will do this manually with certbot. The folder for it has already been created by MinIO and can be found under /home/minio-user/.minio/certs.</p><p>To create the LetsEncrypt certificate, the following has to be entered:</p><pre><code>$ certbot certonly --standalone -d s3.storagebox.mydomain.de --staple-ocsp -m hostmaster@mydomain.de --agree-tos</code></pre><p>Now that the creation was successful, I can copy the certificate to the right place:</p><pre><code>$ cp /etc/letsencrypt/live/s3.storagebox.mydomain.de/fullchain.pem /home/minio-user/.minio/certs/public.crt 
$ cp /etc/letsencrypt/live/s3.storagebox.mydomain.de/privkey.pem /home/minio-user/.minio/certs/private.key 
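# give ownership of the copied certificate and key to the minio-user service account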
$ sudo chown minio-user:minio-user /home/minio-user/.minio/certs/private.key 
$ sudo chown minio-user:minio-user /home/minio-user/.minio/certs/public.crt</code></pre><p>It is important to also enable certbot auto-renewal and to put the commands above into a post-hook script, so that MinIO has the certificates in the right folder after every renewal.</p><h3 id="region-konfigurieren">Configuring the region</h3><p>You also have the option to configure the region with:</p><pre><code>MINIO_SITE_NAME=&quot;fsn1-dc14&quot;
MINIO_SITE_REGION=&quot;eu-central&quot;</code></pre><hr><p>VERY IMPORTANT!!!: please give the root account of the console a very strong password, which should also be rotated regularly. For private use, alternatively block access to the console with a firewall whenever it is not needed.</p>]]></content:encoded></item><item><title><![CDATA[Managing WindowsHosts with Group Policies and scripts without paid apps and domain]]></title><description><![CDATA[Hi,]]></description><link>https://pierewoehl.de/managing-windowshosts-with-group-policies-and-scripts-without-paid-apps-and-domain/</link><guid isPermaLink="false">6676d858f880cf131584ea1e</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Sat, 15 Jan 2022 19:21:32 GMT</pubDate><media:content url="https://pierewoehl.de/content/images/2024/06/1-sph4d5fk3zqorq7ylabagw.png" medium="image"/><content:encoded><![CDATA[<img src="https://pierewoehl.de/content/images/2024/06/1-sph4d5fk3zqorq7ylabagw.png" alt="Managing WindowsHosts with Group Policies and scripts without paid apps and domain"><p>Hi,</p><p>I&#x2019;m Piere, I manage some standalone Windows systems for friends and family members as VMs for Office use and some other stuff.</p><p>I don&#x2019;t want to manage them through a domain, because running a DC, VPNs and remote systems is a lot of work.</p><p>Therefore I looked for a simpler solution.</p><p>I noticed that the Windows gpclient (Group Policy agent) stores its local group policy at the following location: C:\Windows\system32\GroupPolicy. 
And I know software that can synchronize those settings to other systems regardless of their network connectivity: Syncthing.</p><p>To access those folders the process needs to run as SYSTEM.</p><p>So I did the following:</p><ol><li>Download Syncthing for Windows (<a href="https://syncthing.net/downloads/?ref=pierewoehl.de" rel="nofollow noopener">https://syncthing.net/downloads/</a>).</li><li>Extract it to C:\Windows\System32\config\systemprofile\syncthing</li><li>Now add this executable as a task to the Task Scheduler</li></ol><h3 id="establish-group-policy-synchronization">Establish Group Policy synchronization</h3><p>For this I created a folder in the Task Scheduler library named IT Management.</p><p>I changed the user account the task runs under to SYSTEM and set it to ignore power conditions and to not stop.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-fie1n5sv-qnicrm4zd3fxq.png" class="kg-image" alt="Managing WindowsHosts with Group Policies and scripts without paid apps and domain" loading="lazy" width="638" height="489" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/1-fie1n5sv-qnicrm4zd3fxq.png 600w, https://pierewoehl.de/content/images/2024/06/1-fie1n5sv-qnicrm4zd3fxq.png 638w"></figure><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-5okdyuevtjsmypyimusipg.png" class="kg-image" alt="Managing WindowsHosts with Group Policies and scripts without paid apps and domain" loading="lazy" width="644" height="500" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/1-5okdyuevtjsmypyimusipg.png 600w, https://pierewoehl.de/content/images/2024/06/1-5okdyuevtjsmypyimusipg.png 644w"></figure><p>Then I started the task; here you can get the task config file you could import:<br>After this the Syncthing WebUI is accessible at <a href="https://localhost:8384/?ref=pierewoehl.de">http://localhost:8384</a>. After logging in I declined the dialog to send usage data and deleted the default Sync folder. You get a warning about not running Syncthing as SYSTEM; for our use case this is OK because it is actually intentional. To protect the settings I set up the Web UI with an admin account.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-qmsgcymggapeqclqj3gzkw.png" class="kg-image" alt="Managing WindowsHosts with Group Policies and scripts without paid apps and domain" loading="lazy" width="914" height="507" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/1-qmsgcymggapeqclqj3gzkw.png 600w, https://pierewoehl.de/content/images/2024/06/1-qmsgcymggapeqclqj3gzkw.png 914w" sizes="(min-width: 720px) 720px"></figure><p>This is for the main system where the settings are configured and pushed out from. On the client systems this is not necessary, but for management purposes you could set the GUI listen address to a ZeroTier admin network and restrict in ZeroTier which admin PC may connect to those IPs. That way you could also set up other folder syncs remotely if you wish.</p><p>After I configured all the systems with Syncthing and ZeroTier (only if you want that), I added them to the main system as trusted hosts.</p><p>Now I add the new folder to sync to all the clients, use the folder path for the GPOs I mentioned earlier and a proper Folder ID like (Group Policies), 
and a Folder Type of Send Only and Ignore Permissions.</p><p>All clients now get a push notification to add the system to their trust list and to add the shared folder. Accept it, use the same folder path and set it to Receive Only and Ignore Permissions.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-mhx5hkb0-czwyozdglqamq.png" class="kg-image" alt="Managing WindowsHosts with Group Policies and scripts without paid apps and domain" loading="lazy" width="567" height="339"></figure><p>I also have some registry settings I manage with this method that the Local Group Policy editor does not let you set; therefore I use a PowerShell script to set them instead. It is just as easy as the GUI version, potentially easier.</p><p>Please be aware to set HKCU settings as a user logon script rather than a computer startup script.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-kgwlipiz69j5rlmejzfjng.png" class="kg-image" alt="Managing WindowsHosts with Group Policies and scripts without paid apps and domain" loading="lazy" width="947" height="662" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/1-kgwlipiz69j5rlmejzfjng.png 600w, https://pierewoehl.de/content/images/2024/06/1-kgwlipiz69j5rlmejzfjng.png 947w" sizes="(min-width: 720px) 720px"></figure><p>If you click on Show Files you can see the folder those scripts are run from; that is where you place yours.</p><p>Side note: to deploy applications I use winget and choco (Chocolatey) in PowerShell startup scripts.</p><p>To get reports and inventories from the clients, I sync back a folder under the SYSTEM account profile which contains reports generated by PowerShell scripts that run periodically, set up as a scheduled task through a PowerShell script.</p>]]></content:encoded></item><item><title><![CDATA[Changing a ProxMox VE Nodes name when it is not in a cluster [Re-Post]]]></title><description><![CDATA[Originally published at https://www.pierewoehl.de on February 17, 2020.]]></description><link>https://pierewoehl.de/changing-a-proxmox-ve-nodes-name-when-it-is-not-in-a-cluster-re-post/</link><guid isPermaLink="false">6676d858f880cf131584ea23</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Thu, 02 Dec 2021 19:27:05 GMT</pubDate><content:encoded><![CDATA[
<!--kg-card-begin: html-->
<script src="https://gist.github.com/pierew/8735bf1bd6e4228bae04799b6dace080.js"></script>
<!--kg-card-end: html-->
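<p>The embedded gist above contains the exact commands; as a rough sketch of the usual approach for a standalone (non-clustered) node, with oldname/newname as placeholders you should double-check against the gist and your own setup, the rename boils down to:</p><pre><code># sketch only: set the new hostname everywhere the old one appears
hostnamectl set-hostname newname
sed -i 's/oldname/newname/g' /etc/hosts

# after a reboot pmxcfs creates /etc/pve/nodes/newname;
# move the guest configs over from the old node directory
mv /etc/pve/nodes/oldname/qemu-server/*.conf /etc/pve/nodes/newname/qemu-server/
mv /etc/pve/nodes/oldname/lxc/*.conf /etc/pve/nodes/newname/lxc/

# also check /etc/pve/storage.cfg and backup jobs for references to the old name</code></pre>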
<hr><p><em>Originally published at </em><a href="https://www.pierewoehl.de/using-github-container-registry-with-azure-app-service/?ref=pierewoehl.de" rel="noopener"><em>https://www.pierewoehl.de</em></a><em> on February 17, 2020.</em></p>]]></content:encoded></item><item><title><![CDATA[Parsing Microsoft XLSX and Docs to get Firewall Indicators]]></title><description><![CDATA[Working with Firewalls and Microsoft Products it is sometimes necessary to allow certain endpoints in your firewall. Unfortunately not all…]]></description><link>https://pierewoehl.de/parsing-microsoft-xlsx-and-docs-to-get-firewall-indicators/</link><guid isPermaLink="false">6676d858f880cf131584ea22</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Mon, 25 Oct 2021 08:16:33 GMT</pubDate><media:content url="https://pierewoehl.de/content/images/2024/06/1-kesnyfunc0qkphl7duq4va.png" medium="image"/><content:encoded><![CDATA[<img src="https://pierewoehl.de/content/images/2024/06/1-kesnyfunc0qkphl7duq4va.png" alt="Parsing Microsoft XLSX and Docs to get Firewall Indicators"><p>When working with firewalls and Microsoft products it is sometimes necessary to allow certain endpoints in your firewall. Unfortunately not all of them are available through the provided Azure IP and Office 365 IP lists.</p><ul><li><a href="https://www.microsoft.com/en-us/download/details.aspx?id=56519&amp;ref=pierewoehl.de" rel="nofollow noopener">https://www.microsoft.com/en-us/download/details.aspx?id=56519</a></li><li><a href="https://docs.microsoft.com/de-de/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&amp;ref=pierewoehl.de" rel="nofollow noopener">https://docs.microsoft.com/de-de/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide</a></li></ul><p>They are only available either as a list on docs.microsoft.com or as an Excel file, formats I cannot use with my firewall directly.</p><p>To resolve this I built an Azure App which polls these sources and creates a JSON file that is consumable by many indicator aggregators, like MineMeld from Palo Alto Networks.</p><h4 id="defender-atp">Defender ATP</h4><p>When using Defender ATP there are certain domains and IPs that need to be reachable from every system, regardless of its internet access policy. Microsoft provides an Excel file so you can view those endpoints: <a href="https://download.microsoft.com/download/8/a/5/8a51eee5-cd02-431c-9d78-a58b7f77c070/mde-urls.xlsx?ref=pierewoehl.de" rel="noopener">https://download.microsoft.com/download/8/a/5/8a51eee5-cd02-431c-9d78-a58b7f77c070/mde-urls.xlsx</a></p><h4 id="intune">Intune</h4><p>The same applies to Microsoft Intune, for Autopilot or management in general.
To allow for a stable connection they provide a list in their docs: <a href="https://docs.microsoft.com/en-us/mem/intune/fundamentals/intune-endpoints?ref=pierewoehl.de" rel="nofollow noopener">https://docs.microsoft.com/en-us/mem/intune/fundamentals/intune-endpoints</a></p><hr><h3 id="the-azure-app">The Azure App</h3><p>I wanted to allow everybody to just consume those feeds without having to set up a parser on their side. If you want to, it is available as open source here: <a href="https://github.com/pierew/serviceendpoints.azurewebsites.net?ref=pierewoehl.de" rel="nofollow noopener">https://github.com/pierew/serviceendpoints.azurewebsites.net</a></p><p>First I had to understand how to publish my Docker container in Azure App Service; you can read about that here: INSERT LINK</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-eb9wd0ofajs1kvk38t7uqw.png" class="kg-image" alt="Parsing Microsoft XLSX and Docs to get Firewall Indicators" loading="lazy" width="843" height="666" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/1-eb9wd0ofajs1kvk38t7uqw.png 600w, https://pierewoehl.de/content/images/2024/06/1-eb9wd0ofajs1kvk38t7uqw.png 843w" sizes="(min-width: 720px) 720px"></figure><p>Then I built the framework for the container: an endpoint file and the necessary resources and packages for Alpine were easy to find. What was not easy was getting the correct converters for my file formats.</p><p>After some searching I settled on these two Python packages:</p><ul><li>xlsx2csv (pip)</li><li>mdtable2csv (<a href="https://github.com/tomroy/mdtable2csv?ref=pierewoehl.de" rel="noopener">https://github.com/tomroy/mdtable2csv</a>)</li></ul><p>Those commands are just as simple as they sound: you pass in the file you want to convert and they place the conversion in the current working directory or beside the specified file.</p><p>After I had the Excel file converted into some CSV files I could parse them in bash (yes, I know, I could do all of that in Python, but I wasn&#x2019;t that dedicated back then). I first had to make sure that the service categories were in a machine-readable form, so I searched and substituted them through a case/switch loop.</p><p>This was the first version of the service, then available as intelligencefeeds; I have since changed the name to serviceendpoints.</p><p>After that the need for Intune came up, so I searched for those resources. They are on a docs page, and gosh, I thought I would have to parse HTML, but no! Microsoft keeps their docs on GitHub as Markdown files, and where there are Markdown files there is a way to convert them into something else.</p><p>I extracted the table from the Markdown and used the package named above to convert it into CSV files, which I used to create plain text files to serve for now:</p><pre><code>cat ./intune-endpoints.md | awk &apos;/client accesses/,/## Network requirements/&apos; | grep -v &quot;client accesses&quot; | grep -v &quot;## Network requirements&quot; &gt; ./intune-management-endpoints.md</code></pre><p>My next problem was to sanitize the output, turning commas, spaces and line breaks into new lines; tr and sed came in handy for that:</p><pre><code>tr &quot; &quot; &quot;\n&quot; | sed &apos;/^$/d&apos;</code></pre><p>Now the plain text generator was done, but I also wanted JSON for MineMeld, so I went back to the code editor and fiddled around. I knew jq was the tool for doing JSON work on the command line.</p>
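<p>For orientation, here is a minimal, self-contained sketch of that idea: emit one JSON object per indicator and collect them into an array at the end. The file name and field values are placeholders, not the original script:</p><pre><code># sketch only: items.txt holds one URL per line
while read -r item; do
  jq -n --arg type &quot;URL&quot; --arg category &quot;management&quot; --arg url &quot;$item&quot; \
    &apos;{type: $type, category: $category, url: $url}&apos;
done &lt; items.txt | jq -s &apos;.&apos; &gt; feed.json</code></pre>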
<p>I quickly got a jq command working which created proper JSON for every item I wanted to have in the file.</p><p>Basically, for every indicator aggregator each JSON object is one indicator; each domain, URL and IP is its own indicator:</p><pre><code>jq -n --arg type &quot;URL&quot; --arg category &quot;management&quot; --arg url &quot;$item&quot; &apos;{type: $type, category: $category , url: $url}&apos;</code></pre><p>These JSON elements still needed to be formatted to fit the JSON frame around them, so I used sed again:</p><pre><code>| sed &apos;s/}/},/&apos; | sed &apos;s/^/ &#xA0; &#xA0;/&apos; &gt;&gt;</code></pre><p>Not all IPs were listed for their domains, so I had to implement a DNS lookup, which looks like this:</p><pre><code>for ip in $(host -t a $item | grep &quot;has address&quot; | cut -d&quot; &quot; -f4)</code></pre><p>Now the complete logic for the app was done, and I published it under its new name at: <a href="https://serviceendpoints.azurewebsites.net/?ref=pierewoehl.de" rel="noopener">https://serviceendpoints.azurewebsites.net</a></p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-o4dzbhvpzu803ugxubpjqw.png" class="kg-image" alt="Parsing Microsoft XLSX and Docs to get Firewall Indicators" loading="lazy" width="622" height="377" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/1-o4dzbhvpzu803ugxubpjqw.png 600w, https://pierewoehl.de/content/images/2024/06/1-o4dzbhvpzu803ugxubpjqw.png 622w"></figure><hr><h3 id="minemeld">Minemeld</h3><p>MineMeld is an indicator aggregator that prepares indicator feeds, primarily for Palo Alto Networks firewalls. MineMeld is available as a Docker container from PAN&#x2019;s Docker Hub profile &#x201C;paloaltonetworks/minemeld&#x201D;.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/1-1wfqsb46ba06arbnkfaung.png" class="kg-image" alt="Parsing Microsoft XLSX and Docs to get Firewall Indicators" loading="lazy" width="1200" height="529" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/1-1wfqsb46ba06arbnkfaung.png 600w, https://pierewoehl.de/content/images/size/w1000/2024/06/1-1wfqsb46ba06arbnkfaung.png 1000w, https://pierewoehl.de/content/images/2024/06/1-1wfqsb46ba06arbnkfaung.png 1200w" sizes="(min-width: 720px) 720px"></figure><p>You can run the container with:</p><pre><code>docker run -dit --name minemeld --tmpfs /run -v minemeld-local:/opt/minemeld/local -v minemeld-logs:/opt/minemeld/log -p 8443:443 -p 8080:80 paloaltonetworks/minemeld</code></pre><p>To have MineMeld query those feeds you need to create new prototypes based on existing ones. You do that by navigating into the config and opening the Prototype Browser, which is the blue &#x201C;hamburger&#x201D; button.</p><p>You now need an existing prototype to create the new one from; it needs to have the correct parent class to parse JSON.
Fortunately aws.S3 is a JSON prototype; select it and click NEW.</p><p>Now use the miner configs from my GitHub: <a href="https://github.com/pierew/minemeld?ref=pierewoehl.de" rel="nofollow noopener">https://github.com/pierew/minemeld</a></p><p>After creating this new prototype, select it again in the browser and &#x201C;clone&#x201D; it; this creates the node which will pull the feed and filter it based on the set attributes and filters.</p><p>Now go back into the prototypes and clone the feedHCGreen output, which will be responsible for providing the feed in a form your firewall can read properly.</p><p>If you want, you can use an aggregator prototype for IPv4 and URL to aggregate those MS products into one feed, so you only need one feed per indicator type.</p><p>You can also place those prototypes into the /opt/minemeld/local/prototypes/minemeldlocal.yml file and they will be auto-detected.</p><p>If you want these configured for you, you can use my Docker container here: <a href="https://github.com/pierew/minemeld-docker/pkgs/container/minemeld-docker?ref=pierewoehl.de" rel="nofollow noopener">https://github.com/pierew/minemeld-docker/pkgs/container/minemeld-docker</a></p>]]></content:encoded></item><item><title><![CDATA[pfSense and HAProxy — ACL for SNI host-name matching does not work]]></title><description><![CDATA[Photo by Compare Fibre / Unsplash]]></description><link>https://pierewoehl.de/pfsense-and-haproxy-acl-for-sni-host-name-matching-does-not-work/</link><guid isPermaLink="false">6676d858f880cf131584ea20</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Sat, 18 Sep 2021 22:17:17 GMT</pubDate><media:content url="https://pierewoehl.de/content/images/2024/06/0-9ursarlpmmdf4zki.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://pierewoehl.de/content/images/2024/06/0-9ursarlpmmdf4zki.jpg" alt="pfSense and HAProxy&#x200A;&#x2014;&#x200A;ACL for SNI host-name matching does not work"><p>Photo by <a href="https://unsplash.com/@comparefibre?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit" rel="noopener">Compare Fibre</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit" rel="noopener">Unsplash</a></p><p>Hi, I tried to publish the Syncthing WebGUIs from my DMZ systems to my internally accessible HAProxy VIP on my pfSense firewall and couldn&#x2019;t figure out WHY I couldn&#x2019;t connect to the service; it kept returning &#x201C;503 Server not found&#x201D; until I chose to use the default backend.</p><p>I was stuck for half an hour, right up to the moment I am writing this blog entry. Then I figured it out:</p><p>HAProxy uses the first ACL match per IP across the frontends, NOT taking the ports into account.</p>
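<p>For context, this is roughly what plain SNI host-name matching looks like in raw HAProxy terms; an illustrative sketch with made-up names, not my pfSense-generated config:</p><pre><code>frontend shared_tls
    bind 192.168.10.1:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    # route on the SNI host name sent in the TLS ClientHello
    use_backend bk_syncthing if { req.ssl_sni -i sync.example.org }
    default_backend bk_default</code></pre>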
<p>I had to use a different ACL check that matches only the frontend I wanted.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/0-qooe2r3wrqrelgk5.png" class="kg-image" alt="pfSense and HAProxy&#x200A;&#x2014;&#x200A;ACL for SNI host-name matching does not work" loading="lazy" width="1171" height="224" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/0-qooe2r3wrqrelgk5.png 600w, https://pierewoehl.de/content/images/size/w1000/2024/06/0-qooe2r3wrqrelgk5.png 1000w, https://pierewoehl.de/content/images/2024/06/0-qooe2r3wrqrelgk5.png 1171w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/0-fx78ovbs2mgc6zh9.png" class="kg-image" alt="pfSense and HAProxy&#x200A;&#x2014;&#x200A;ACL for SNI host-name matching does not work" loading="lazy" width="1115" height="778" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/0-fx78ovbs2mgc6zh9.png 600w, https://pierewoehl.de/content/images/size/w1000/2024/06/0-fx78ovbs2mgc6zh9.png 1000w, https://pierewoehl.de/content/images/2024/06/0-fx78ovbs2mgc6zh9.png 1115w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/0-vldaei6qnij4-jlw.png" class="kg-image" alt="pfSense and HAProxy&#x200A;&#x2014;&#x200A;ACL for SNI host-name matching does not work" loading="lazy" width="1184" height="900" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/0-vldaei6qnij4-jlw.png 600w, https://pierewoehl.de/content/images/size/w1000/2024/06/0-vldaei6qnij4-jlw.png 1000w, https://pierewoehl.de/content/images/2024/06/0-vldaei6qnij4-jlw.png 1184w" sizes="(min-width: 720px) 720px"></figure><hr><p><em>Originally published at </em><a href="https://www.pierewoehl.de/pfsense-and-haproxy-acl-for-sni-host-name-matching-does-not-work/?ref=pierewoehl.de" rel="noopener"><em>https://www.pierewoehl.de</em></a><em> on September 18, 2021.</em></p>]]></content:encoded></item><item><title><![CDATA[Using GitHub Container Registry with Azure App Service]]></title><description><![CDATA[Photo by Roman Synkevych / Unsplash]]></description><link>https://pierewoehl.de/using-github-container-registry-with-azure-app-service/</link><guid isPermaLink="false">6676d858f880cf131584ea21</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Sat, 18 Sep 2021 19:58:51 GMT</pubDate><media:content url="https://pierewoehl.de/content/images/2024/06/0-jxhzimcgc8mpqwi6.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://pierewoehl.de/content/images/2024/06/0-jxhzimcgc8mpqwi6.jpg" alt="Using GitHub Container Registry with Azure App Service"><p>Photo by <a href="https://unsplash.com/@synkevych?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit" rel="noopener">Roman Synkevych</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit" rel="noopener">Unsplash</a></p><p>I needed to connect my GHCR to an Azure App Service for CI/CD, but I didn&#x2019;t want to use GitHub Actions the way Microsoft would like the integration to be done; I didn&#x2019;t like their implementation.</p><p>How did I do it?</p><p>See, you can use a private registry with Azure App Service.</p><p>The image is public, therefore I don&#x2019;t need any credentials passed in.
To enable the continuous deployment part I created a GitHub Action curling the webhook URL, since GitHub repository webhooks don&#x2019;t accept the in-line authentication that Azure embeds in the URL it provides.</p><p>Here are the GitHub Actions I created for you to re-use:</p><!--kg-card-begin: html--><script src="https://gist.github.com/pierew/b98526cdf532266dffa3c0c5d2e50e2d#file-deployment-azure-app-service-yml.js"></script><!--kg-card-end: html--><p>Just make sure to update the Azure URL; you need to use your own endpoint, which is the &#x201C;name of the service&#x201D;, in my case intelligencefeeds.</p><p>If you&#x2019;re curious, this is a service providing the Defender ATP service endpoints from Microsoft (with others to follow) for use in your firewall.</p><hr><p><em>Originally published at </em><a href="https://www.pierewoehl.de/using-github-container-registry-with-azure-app-service/?ref=pierewoehl.de" rel="noopener"><em>https://www.pierewoehl.de</em></a><em> on September 18, 2021.</em></p>]]></content:encoded></item><item><title><![CDATA[My Journey recovering from a major UniFi Controller failure]]></title><description><![CDATA[I have quite a complicated UniFi Networking Setup segregating over 20 different networks and was using a well-known Third-Party UniFi…]]></description><link>https://pierewoehl.de/my-journey-recovering-from-a-major-unifi-controller-failure/</link><guid isPermaLink="false">6676d858f880cf131584ea1f</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Sat, 12 Jun 2021 20:01:19 GMT</pubDate><media:content url="https://pierewoehl.de/content/images/2024/06/0-h9cmpeaetpgo4gdp.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://pierewoehl.de/content/images/2024/06/0-h9cmpeaetpgo4gdp.jpg" alt="My Journey recovering from a major UniFi Controller failure"><p>I have quite a complicated UniFi networking setup segregating over 20 different networks and had been using a well-known third-party UniFi hosting provider since 2018.</p><p>Three days ago they updated their UniFi Controller, which I had been using for free as a long-term beta member from years ago. It broke, and they neither admitted it nor offered to help.</p><p>I asked them to provide me with a backup, since they had cut the site migrate and backup functionality out of my account; they refused, so I had to re-do my network&#x2026;</p><p>Since my current config was still running on the UniFi gear I aimed for a smooth transition instead of rebuilding from scratch,
since I didn&#x2019;t want to completely overhaul my rack just to keep the Proxmox node delivering the UniFi Controller as a VM permanently connected to the gear and mess up my HA setup for it.</p><p>I have my UniFi Controller published for control traffic to serve other locations I support (friends, family), so I had to make sure it was always available from the internet. This would also make the migration easier, since the gear only needs internet access to be adopted and re-integrated into the network.</p><h3 id="the-before">The Before</h3><p>Above you see the basic map view of the network, and after that the network setup from a broader perspective.</p><h3 id="the-plan">The Plan</h3><ol><li>Install the new UniFi Controller into a VM on one Proxmox host</li><li>Pre-configure the VLANs and networks from documentation</li><li>Publish the controller via pfSense/HAProxy</li><li>Change the UniFi Controller DHCP IP in the USG via command line</li><li>Reset the access points one by one</li><li>Adopt them with the new controller</li><li>Reset switch2 and adopt it</li><li>Prepare the USG ports on switch2</li><li>Move the USG patch cables (one by one, keeping the uplink)</li><li>Reset the core switch (activating the STP-blocked secondary link between switch1 and switch2)</li><li>Adopt the core switch and configure the DSL modem, USG and trunk ports</li><li>Move the Proxmox node and one AP to switch2</li><li>Reset switch1, adopt it and configure the NAS, Proxmox and UAP ports</li><li>Move the Proxmox node and the AP back</li></ol><h3 id="the-result">The Result</h3><p>Everything went smoothly because I had planned for redundancy of internet, Proxmox uplink and switch links, so I never had to connect via Ethernet to any switch to configure something while isolated from the rest of the network.</p><p>Well, no, I had one issue to overcome: when resetting the DSL switch I lost connectivity to the UniFi Controller VM, since its uplink went through the DSL-connected pfSense. So I moved the VM into the VLAN of the modem to connect it directly to the provider NAT, just for the adoption of the core switch.</p><p>But wait, where is the USG?<br>This was a bit trickier: I had to make sure the controller had internet access while the central router was down. How did I make it work?</p><p>Remember the LTE uplink being available to pfSense as a fallback for the published services?<br>Well, here in Germany our LTE providers don&#x2019;t give us public IPs with port forwarding, even though the contract was already a home internet access plan and not just a data plan.</p><p>So I have a cloud-hosted VM acting as a VPN exit node for the LTE connection. I simply added my DSL pfSense via VPN to the same instance and configured NAT to fail over between the links when one goes down, effectively bypassing my Cloudflare load balancer listening on the port forwarding of the DSL line.</p><p>With this setup I could reset the USG while the controller stayed reachable through the VPN connections bypassing the USG.</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/0-95w9qg_dp0n0-t6r.png" class="kg-image" alt="My Journey recovering from a major UniFi Controller failure" loading="lazy" width="1268" height="1168" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/0-95w9qg_dp0n0-t6r.png 600w, https://pierewoehl.de/content/images/size/w1000/2024/06/0-95w9qg_dp0n0-t6r.png 1000w, https://pierewoehl.de/content/images/2024/06/0-95w9qg_dp0n0-t6r.png 1268w" sizes="(min-width: 720px) 720px"></figure><p>How did I connect to it after the reset?<br>I created a WLAN bridged into the USG&#x2019;s native LAN network, where I connected with my Mac to configure it.</p>
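<p>For the adoption steps, the device-side part is the usual UniFi set-inform procedure; a rough sketch from memory, with the controller address as a placeholder and the default inform port assumed:</p><pre><code># SSH into the factory-reset device (default credentials ubnt/ubnt)
# and point it at the new controller, then confirm the adoption in the UI
ssh ubnt@192.168.1.20
set-inform http://unifi.example.lan:8080/inform</code></pre>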
<p>After the USG was adopted I had managed to migrate my entire UniFi network to a new controller without any interruption to any hosted service.</p><p>As a side effect I can now cancel my Cloudflare load balancer subscription!</p><hr><p><em>Originally published at </em><a href="https://www.pierewoehl.de/my-journey-recovering-from-a-major-unifi-controller-failure/?ref=pierewoehl.de" rel="noopener"><em>https://www.pierewoehl.de</em></a><em> on June 12, 2021.</em></p>]]></content:encoded></item><item><title><![CDATA[pfSense/HAProxy — Rewrite Backend URL Request to masquarade path]]></title><description><![CDATA[Hi Community, I was setting up my pfSense to proxy through web resources so I didn’t have to deal with updating firewall rules when…]]></description><link>https://pierewoehl.de/pfsense-haproxy-rewrite-backend-url-request-to-masquarade-path/</link><guid isPermaLink="false">6676d858f880cf131584ea1a</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Fri, 18 Dec 2020 23:00:20 GMT</pubDate><media:content url="https://pierewoehl.de/content/images/2024/06/0-jrfaqg7vs_bs3zsh.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://pierewoehl.de/content/images/2024/06/0-jrfaqg7vs_bs3zsh.jpg" alt="pfSense/HAProxy&#x200A;&#x2014;&#x200A;Rewrite Backend URL Request to masquarade path"><p>Hi community, I was setting up my pfSense to proxy web resources through it, so I don&#x2019;t have to deal with updating firewall rules when services change, or allow general internet and DNS access to the containers and VMs.</p><p>To do this you have to add some rewrite rules to the backend server definition.</p><p>Let&#x2019;s split this rule up into its parts to understand it a bit more.</p><ul><li>The &#x201C;%[]&#x201D; indicates a function to run, or more precisely &#x201C;take the input in parameter one and process it through the function defined in parameter two&#x201D;.</li><li>The &#x201C;path&#x201D; is the variable containing the current path used by the request.</li><li>The &#x201C;regsub&#x201D; is the function that substitutes the input via a regular expression, like an &#x201C;s&#x201D; substitution.</li><li>The input expression for the substitution, &#x201C;/linux/epel/?&#x201D;, is the incoming path from the frontend to be matched; the &#x201C;?&#x201D; says everything after it is passed through.</li><li>The output expression for the substitution, &#x201C;/pub/epel/&#x201D;, is the path to rewrite to. So, for example:</li></ul><pre><code>Request on Frontend:                   mirror.woehl.network/linux/epel/
Rewrite on Frontend to Backend Server: [dl.fedoraproject.org]/linux/epel/
Rewrite on Backend to new Path:        dl.fedoraproject.org/[pub/epel/]</code></pre><hr><p><em>Originally published at </em><a href="https://www.pierewoehl.de/pfsense-haproxy-rewrite-backend-url-request-to-masquarade-path/?ref=pierewoehl.de" rel="noopener"><em>https://www.pierewoehl.de</em></a><em> on December 18, 2020.</em></p>]]></content:encoded></item><item><title><![CDATA[Proxmox: How to delete a template that is linked to VMs [ENG]]]></title><description><![CDATA[Today I planned to update my wordpress alpine lxc template, So I cloned it into a temp VM, edited and updated the VM and wanted to delete…]]></description><link>https://pierewoehl.de/proxmox-how-to-delete-a-template-that-is-linked-to-vms-eng/</link><guid isPermaLink="false">6676d858f880cf131584ea24</guid><dc:creator><![CDATA[Piere Wöhl]]></dc:creator><pubDate>Fri, 04 Dec 2020 22:00:43 GMT</pubDate><media:content 
url="https://pierewoehl.de/content/images/2024/06/0-nqv2gafr30gcjcbr.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://pierewoehl.de/content/images/2024/06/0-nqv2gafr30gcjcbr.jpg" alt="Proxmox: How to delete a template that is linked to VMs [ENG]"><p>Today I planned to update my wordpress alpine lxc template, So I cloned it into a temp VM, edited and updated the VM and wanted to delete the old template and overwrite it with the new, dang&#x2026; it doesn&#x2019;t work</p><figure class="kg-card kg-image-card"><img src="https://pierewoehl.de/content/images/2024/06/0-hmkme292pbhsrprf.png" class="kg-image" alt="Proxmox: How to delete a template that is linked to VMs [ENG]" loading="lazy" width="808" height="402" srcset="https://pierewoehl.de/content/images/size/w600/2024/06/0-hmkme292pbhsrprf.png 600w, https://pierewoehl.de/content/images/2024/06/0-hmkme292pbhsrprf.png 808w" sizes="(min-width: 720px) 720px"></figure><p>Yep I had link-cloned it for another project container, sooo I had to do it the hard way.</p><p>How you ask ?</p><p>So Proxmox is running on ZFS as the storage driver, to resolve this issue I had to undo all linked references to the parent disk, or zfs volume if you are specific.</p><p>So I done it before but had to think twice how to go with it.</p><p>You do the following:</p><ol><li>Go through your VMs/LXC Container and look for a disk reference with a basedisk-xxx reference in it, this shows the reference to the basedisk of your template.</li><li>Select the Disk</li><li>Click on Move Disk</li><li>Select the storage your current disk lies on (no worries, it will move(clone) to a local disk as a full clone)</li><li>Change the view to the snapshots.</li><li>Delete all snapshots from the past.</li><li>Now select the disk from the Resources/Hardware overview and delete the now unused disk with basedisk-xxx/subxxxx</li></ol><p>Now you can delete your template and replace it with the new.</p><hr><p><em>Originally published at </em><a href="https://www.pierewoehl.de/proxmox-how-to-delete-a-template-that-is-linked-to-vms-eng/?ref=pierewoehl.de" rel="noopener"><em>https://www.pierewoehl.de</em></a><em> on December 4, 2020.</em></p>]]></content:encoded></item></channel></rss>