Posts

Create a public/anonymous Windows network drive on TrueNAS 13.0-U3.1 Core (2022)

Capturing my step-by-step notes here on how to set up a public SMB share on a network using TrueNAS, so that next time I don't need to fumble around. If this helps someone else, that's a bonus. Starting from a completely fresh installation on a system with three hard drives: in this virtualised trial run I have a 10GB drive for the TrueNAS installation and 2 x 32GB drives for storage.

- Log in as root
- Storage / Pools / Add
  - Create Pool
  - Name: mypool
  - Suggest Layout
  - Create, then confirm with Create Pool
- Options / Add Dataset
  - Name: pub
  - Submit
- Sharing / Windows Shares (SMB) / Add
  - Path: /mnt/mypool/pub
  - Advanced Options
  - Allow Guest Access
  - Submit
  - Enable Service
- Configure ACL / Configure Now
  - Default ACL Options: OPEN
  - Continue
  - User: nobody
  - Group: nobody
  - Apply User + Apply Group
  - Save
- Open your network drive in Finder and check that you can create a folder

Enjoy.
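To verify guest access from another machine, smbclient works well; a quick check, assuming the server resolves as truenas.local (the hostname is my assumption, substitute your server's address):

    # -N: no password prompt, -L: list shares anonymously
    smbclient -N -L //truenas.local
    # Connect to the share as guest and try to create a directory
    smbclient -N //truenas.local/pub -c 'mkdir testdir; ls'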

Preparing a release configuration Yocto image (for Raspberry Pi)

In an attempt to build a small image for an application running on Yocto and Raspberry Pi, I found the following things helpful:

    IMAGE_LINGUAS = " "

I don't need support for any additional languages.

    IMAGE_FEATURES += "read-only-rootfs"

For my application the content is fixed, so a read-only rootfs is fine. I'd imagine this could help reduce the notorious SD card failures as well.

    IMAGE_INSTALL += "\
        packagegroup-core-boot \
        kernel-module-xxx \
        busybox-udhcpc \
        ntpdate \
        ncurses \
        "

For a networked application, I just need to be able to boot, get an IP address and the time, and then use one specific kernel driver; there's no need to install them all. Surprisingly, the avahi mDNS package seems to be missing a dependency declaration on ncurses.

    IMAGE_ROOTFS_SIZE = "0"
    IMAGE_OVERHEAD_FACTOR = "1.15"
    IMAGE_ROOTFS_EXTRA_SPACE = "1"

Since the rootfs is read-only, there's little need for extra space in the image.
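Rather than scattering these across local.conf, one way to collect them is a small image recipe. The sketch below is my own arrangement (the recipe name is an assumption, and the kernel-module placeholder is left out); it would be built with bitbake my-release-image:

    # my-release-image.bb -- hypothetical minimal release image recipe
    SUMMARY = "Minimal read-only release image"
    LICENSE = "MIT"

    inherit core-image

    IMAGE_LINGUAS = " "
    IMAGE_FEATURES += "read-only-rootfs"

    IMAGE_INSTALL = "\
        packagegroup-core-boot \
        busybox-udhcpc \
        ntpdate \
        ncurses \
        "

    IMAGE_ROOTFS_SIZE = "0"
    IMAGE_OVERHEAD_FACTOR = "1.15"
    IMAGE_ROOTFS_EXTRA_SPACE = "1"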

OpenCV static library in Yocto

OpenCV is built either as a shared object or a static library, not both. The Yocto recipe in meta-oe builds the library as shared objects, which is a sensible default for sure. As a note to future self, one way to build a subset of OpenCV as a static library instead is to add the following to e.g. conf/local.conf:

    PACKAGECONFIG:pn-opencv = " gstreamer"
    EXTRA_OECMAKE:append:pn-opencv = " \
        -DBUILD_SHARED_LIBS=OFF \
        -DENABLE_PIC=ON \
        -DWITH_PROTOBUF=OFF \
        -DBUILD_PROTOBUF=OFF \
        -DBUILD_opencv_python2=OFF \
        -DBUILD_opencv_python3=OFF \
        -DBUILD_JAVA=OFF \
        "
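After a rebuild, the static archives should replace the shared objects; one rough way to confirm (the exact work-directory layout depends on your machine and architecture, so the path here is indicative only):

    bitbake opencv -c cleansstate && bitbake opencv
    # Layout under tmp/work varies per machine/arch
    find tmp/work -name 'libopencv_*.a'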

Patching an Android application

As a note to myself in the future, to modify Android .apk behaviour, these are the main steps:

Initial setup of tools. Install Android Studio, then:

    brew install jadx
    brew install smali
    brew install apktool

Set up the tools in the path:

    export PATH=~/Library/Android/sdk/platform-tools:~/Library/Android/sdk/build-tools/30.*:$PATH

Generate a signing key:

    keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000

Either pull the original package from a device with developer mode enabled:

    adb connect 192.168.x.x
    adb -s 192.168.x.x:5555 shell pm list packages | grep com.example
    adb -s 192.168.x.x:5555 shell pm path com.example.app
    adb -s 192.168.x.x:5555 pull /data/app/com.example.app/base.apk

Or, if you'd rather not enable developer mode:

- Extract the package using ML Manager: APK Extractor
- Copy the package over to e.g. an SMB network drive using the File Commander file manager

Decompile the package and figure out the changes needed:

    jadx base.apk -d .
    emacs &  # left as an exercise
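The excerpt stops at decompilation; the remaining round trip with the tools installed above usually looks roughly like this (a sketch; the file and directory names follow the examples above):

    # Disassemble the dex to smali, make the edits, then rebuild
    apktool d base.apk -o base_src
    # ... modify base_src/smali/... according to what jadx revealed ...
    apktool b base_src -o patched.apk
    # Align and sign with the key generated earlier
    zipalign -p 4 patched.apk patched-aligned.apk
    apksigner sign --ks my-release-key.keystore patched-aligned.apk
    # Install on the device (uninstall the original first if the signatures differ)
    adb install -r patched-aligned.apk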

Proxmox PCIe passthrough on HP Gen8 - failed to set iommu for container

Problem

Setting up PCIe passthrough from host to a VM was supposed to be easy. However, being an HP server, there was a bit more to it than usual. The VM simply refused to start when configured to use the Nvidia GPU from the host:

    vfio error: 0000:04:00.0: failed to setup container for group 21: failed to set iommu for container: Operation not permitted

In dmesg there was a bit more background on what was wrong:

    vfio-pci 0000:04:00.1: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.

Luckily, HP had issued a customer advisory on this. It describes a convoluted method to disable RMRR on a per-slot basis. It seems to work for me, so I thought I'd write down some notes in case I ever run into this again.

Basic setup

Proxmox has decent instructions for preparing the host for the passthrough setup in general. In summary (sketched more concretely below):

- add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT in the file /etc/default/grub
- add the vfio modules to /etc/modules
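Concretely, the basic host setup amounts to something like this (a sketch of the standard Proxmox preparation for an Intel host; reboot after applying):

    # /etc/default/grub -- enable the IOMMU
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

    # /etc/modules -- load the vfio stack at boot
    vfio
    vfio_iommu_type1
    vfio_pci

    # Apply, then verify after a reboot
    update-grub
    dmesg | grep -e DMAR -e IOMMU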

Backup and restore Observium

Note to self: how to back up and restore an Observium installation such as the TurnKey Linux appliance.

Backup:

    mysqldump --all-databases > sql.txt
    tar zcvf rrd.tgz /opt/observium/rrd

Restore:

- log in at https://observium:12322/ (Adminer) and import sql.txt
- tar zxvf rrd.tgz -C /
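To make the backup side unattended, a minimal nightly cron script could look like this (the backup directory and date-stamped names are my own choices, not part of the original notes):

    #!/bin/sh
    # /etc/cron.daily/observium-backup -- hypothetical backup script
    BACKUP_DIR=/var/backups/observium
    mkdir -p "$BACKUP_DIR"
    mysqldump --all-databases > "$BACKUP_DIR/sql-$(date +%F).txt"
    tar zcf "$BACKUP_DIR/rrd-$(date +%F).tgz" /opt/observium/rrd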

FreeNAS and Proxmox iSCSI

Just taking notes on the experiment.

FreeNAS
- Storage / Pools
  - mypool - add zvol
    - zvol name: myscsizvol
    - size: 20G
- Sharing / Block (iSCSI)
- Target Global Configuration
  - leave at defaults - basename of iqn.2005-10.org.freenas.ctl
- Portals - add with defaults
- Authorized Access - add
  - group id: 1
  - user: iscsi
  - secret: secret
- Targets - add
  - name: iscsi-trial
  - portal group id: 1
- Extents - add
  - name: myextent
  - extent type: device
  - device: mypool/myscsizvol
- Associated Targets
  - target: iscsi-trial
  - lun id: 0
  - extent: myextent

Proxmox
- Datacenter / Storage - add iSCSI
  - ID: iscsi
  - portal: IP address of the FreeNAS server
  - target: iqn.2005-10.org.freenas.ctl:iscsi-trial
  - use LUNs directly
- check that
  - the node has storage 'iscsi' as "available", not "unknown"
  - its content tab has a disk image listed
- create virtual machine
  - Hard Disk
    - Bus: SCSI
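A quick way to sanity-check the connection from the Proxmox host shell (the portal IP is a placeholder for the FreeNAS address used above):

    # Discover the targets the FreeNAS portal exposes
    iscsiadm -m discovery -t sendtargets -p 192.168.x.x
    # Confirm the storage is active and shows its LUNs
    pvesm status
    pvesm list iscsi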