Odd Wildcard Matching in Windows 10

I recently ran into an odd behavior where more files matched a pattern than I expected. I’d used exiftool to modify the dates on files my GoPro produced; exiftool creates backup copies of the original images when it modifies the tags. Here’s the command I ran.

exiftool.exe -r "-AllDates+=4:7:6 17:40:00" -ext jpg f:\GoPro\20170807

Now I had about 4000 files with the .JPG extension and another 4000 files with a .JPG_original extension.

I ran my program that parses the directory structure and turns all those images into a time lapse movie, and it seemed to be including both file extensions, making a very disjointed movie.

I loaded my source code in the debugger and confirmed it was doing a findfirst / findnext looking specifically for .JPG files, not some other extension, but it was definitely retrieving files with both .JPG and .JPG_original extensions.

I then ran a couple of commands at the Windows command prompt and was surprised to find the same results there.

dir F:\GoPro\20170807\372GOPRO\G*.JPG /p
dir F:\GoPro\20170807\372GOPRO\G???????.JPG /p

Each command returned both the JPG and JPG_original files.

dir F:\GoPro\20170807\372GOPRO\G*.JPG_original /p

returned just the JPG_original files.

dir F:\GoPro\20170807\372GOPRO\G??????.JPG /p

had one fewer question mark and correctly returned no files.

This is all unexpected behavior, though I’m glad to see that it’s consistent at the operating system level and not something specific to the C runtime. I’d love an explanation of what’s going on.

SpeedTest.Net results from different devices

I’m visiting my parents today, and one of the routine things I do is check the condition of their internet connection.

I’ve got the speedtest.net app installed on my iPad. Running it produced acceptable results. 17Mb/s is not great, but it should be good enough to stream HD video, and that’s the main thing I want to just work when I’m not visiting.


I brought up the website in my browser on my Microsoft Surface tablet and received significantly better results.


71Mb/s download is almost comparable to what I’m getting at home. At home I’ve got symmetric bandwidth, so my upload speeds are often better than my download speeds.

Both of these tests were run through an old Cisco RV110W Wireless-N gateway that only runs on 2.4GHz frequencies.

I’ve registered significantly higher speed transfers on my iPad in the past.

Is the iPad limited in its transfer speed when running on 2.4GHz? It’s possible that the higher-speed transfers in my iPad history all happened when I was connected to my home router running 5GHz.

My First PC

This last weekend I found the receipt for the first PC my father bought. My parents are in the process of significant downsizing, and while I don’t want to uselessly clutter up my own space with things that were in their garage, having a PC in my home for my final years of high school affected my entire life.

How much did you pay for your first PC and enough software to make it useful?

My father paid $6512.45 in 1983.

This was for a 64k PC running DOS 2.0, a word processor, and a printer.

MicroAge Computer Store IBM PC Invoice Page 1


MicroAge Computer Store IBM PC Invoice Page 2


The Hayes 1200bps Smart Modem alone cost $699.00.

This machine had two 360k floppy drives. I ran a bulletin board system that booted and ran from one floppy drive, and stored files available for download on the second drive. A box of ten 360k floppy disks cost $50.

The printer adapter was not built into the machine and had to be purchased separately for $150. One extravagance he purchased was the Microbuffer for $349, which went in-line between the computer and printer: the computer could dump its print job into the buffer quickly, and the buffer would feed the printer at whatever speed it could accept. This was long before anyone would think about multitasking on a home computer, or even think of printing as a separate task.

Samsung UD590 working with Gigabyte GEFORCE GTX 660

In a previous post I mentioned that I was having problems making my new Samsung 4k UHD monitor work at full resolution.

Gigabyte GEFORCE GTX 660


I ordered a new Gigabyte GEFORCE GTX 660 video card from Newegg, removed the old video card, and now have three monitors plugged directly into this video card.



My center monitor is the new Samsung display running at 3840×2160 on the DisplayPort connection, and the left and right monitors are each HP Pavilion 22bw monitors running 1920×1080 on the DVI ports. The one strange thing is that Windows recognizes the dot pitch of the Samsung monitor and attempts to make things larger than I’d like. There is a configuration option to make text larger or smaller, reached by following the link on the screen resolution dialog, and I have moved the slider one notch smaller than center.


Samsung UD590 Monitor

I bought a Samsung UD590 monitor from Amazon and it arrived Saturday May 3rd. Its native resolution is 3840×2160, which is twice 1920×1080 in each direction. It came with a single HDMI cable and a DisplayPort cable.

Samsung UD590

I bought this for its resolution and price. It sells for $699. It claims to be able to run at 60Hz input at full resolution, where some of the other monitors in this price range only run 30Hz. The box advertises a 1ms fast response time, but I’m not certain how that translates into real-world performance.

My previous monitor setup had an HP Pavilion 22bw as the center monitor, the same model as the right monitor, and an old Samsung SyncMaster 205BW monitor in portrait mode as the left monitor.

ASUS Sabertooth Z87

My machine is based on an ASUS Sabertooth Z87 TUF motherboard. I’ve got an Intel® Core™ i7-4771 processor running with 32GB of RAM. I’ve been running the center monitor from the embedded GPU using the HDMI output on the motherboard. I’ve got an old NVIDIA GeForce 8600GT based graphics card driving the left and right monitors via DVI ports. It is an ASUS EN8600GT Silent card. I’m running Windows 8.1.

ASUS EN8600GT

I haven’t decided if I want to switch to just using this new monitor, or if I want to keep using the two HP monitors as left and right flanks. My initial test had me plugging the new monitor into the display port on the motherboard and having the HP monitors plugged in via DVI on the card.

When I booted the machine initially, I saw the UEFI screen from the motherboard correctly on the new monitor, then the opening screens of Windows booting on the new monitor, and then the monitor went blank, leaving only the pretty desktop backgrounds visible on the side monitors. Through a bunch of trial and error, I figured out that if I reduced the resolution on the Samsung in Windows from 3840×2160 to 2560×1440, things worked without going blank. I went so far as to remove the NVIDIA card entirely to see if it was some sort of interaction, but that didn’t seem to help.

By total chance I found out that if I have a large amount of constant white space on the screen I can run the monitor at full resolution. If I’ve got an empty copy of Notepad filling the screen, then the screen runs fine at its native resolution. But if I load an app that throws any level of color complexity on the screen, it shows the image, then goes all black, and blinks the image up approximately one second out of ten.

I don’t understand if this problem is related to the monitor, or related to the motherboard output, or possibly even the cable. I’m using the cable that came with the monitor, so I’ve been discounting that. I’m assuming the problem has to do with the bandwidth of driving 3840×2160 at 32 bit color.

I don’t mind going out and buying a new display card to drive the monitor, but I’m not a gamer so don’t want to spend money for a top of the line gaming card when all that I want to do is drive high resolution and have a reasonable refresh rate for displaying video. Ideally a video card would be able to drive three monitors. I expect I’d drive the big monitor via display-port, and the secondary monitors via a display-port to DVI cable.

Any suggestions as to what exactly my problem is would be useful.

BeagleBoneBlack 5.8GHz WiFi Reliability

After upgrading the operating system, providing more power via a powered USB Hub, and better understanding the startup scripts, I seem to have a reliable WiFi link from my BBB.

I still have occasional problems at boot time with the device not connecting to my WiFi network. I’ve got an FTDI USB-to-serial TTL console cable that I can connect to the device to examine its status. Most of the time when I can’t reach the device over the network and do this, running the lsusb command shows nothing connected beyond the internal USB devices.

root@beaglebone:~# lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

If I disconnect the USB hub, remove and reapply its power, and reconnect the USB hub, sometimes the BBB will recognize the USB devices, but often it requires removing all power, disconnecting the hub, and reconnecting everything.

USB power is the first issue in getting things to work. I only have the verbose reports from the lsusb command to go on when deciding how much power I need. The spec sheet for the BBB reports that it can only supply 500 mA on its USB port, and even then only if it’s powered by an external power adapter via the barrel jack. My WiFi adapter reports 450mA. My camera reports 500mA. The hub in self-powered operation reports 100mA. The power adapter that came with my hub reports its output as 2.1A, which would indicate that it should be able to provide the standard 500mA to each of its four ports if it’s running on external power.

root@beaglebone:~# lsusb ; lsusb --verbose | grep MaxPower
Bus 001 Device 002: ID 0409:005a NEC Corp. HighSpeed Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 13b1:002f Linksys AE1000 v1 802.11n [Ralink RT3572]
Bus 001 Device 004: ID 046d:082d Logitech, Inc.
    MaxPower              100mA
    MaxPower                0mA
    MaxPower                0mA
    MaxPower              450mA
    MaxPower              500mA

I’m running a system that I started by flashing my eMMC with the 9/4/2013 image I downloaded from http://circuitco.com/support/index.php?title=Updating_The_Software#Procedure

The dmesg command reports the kernel as “Linux version 3.8.13 (koen@rrMBP) (gcc version 4.7.3 20130205 (prerelease) (Linaro GCC 4.7-2013.02-01) ) #1 SMP Wed Sep 4 09:09:32 CEST 2013”

I am running with a 32GB microSD card installed, partitioned into two volumes. In the root of the FAT volume I’ve got a uEnv.txt file that continues the boot process to the eMMC and also issues the kernel command to disable the internal HDMI cape on the BBB. Since I’m only running this device over the network, I decided it is more efficient to disable the HDMI entirely. I don’t think the HDMI changes affect my WiFi, but I’ve not investigated that either.

root@beaglebone:~# fdisk -l /dev/mmcblk0 /dev/mmcblk1

Disk /dev/mmcblk0: 31.9 GB, 31914983424 bytes, 62333952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000

        Device Boot      Start         End      Blocks   Id  System
/dev/mmcblk0p1            2048    41945087    20971520    c  W95 FAT32 (LBA)
/dev/mmcblk0p2        41945088    62333951    10194432   83  Linux

Disk /dev/mmcblk1: 1920 MB, 1920991232 bytes, 3751936 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000

        Device Boot      Start         End      Blocks   Id  System
/dev/mmcblk1p1   *          63      144584       72261    c  W95 FAT32 (LBA)
/dev/mmcblk1p2          144585     3743144     1799280   83  Linux

root@beaglebone:~# cat /media/BONEBOOT/uEnv.txt
optargs=quiet capemgr.disable_partno=BB-BONELT-HDMI,BB-BONELT-HDMIN

root@beaglebone:~# cat /etc/fstab
rootfs               /                    auto       defaults              1  1
proc                 /proc                proc       defaults              0  0
devpts               /dev/pts             devpts     mode=0620,gid=5       0  0
tmpfs                /tmp                 tmpfs      defaults              0  0
/dev/mmcblk0p2       /home                auto       defaults              0  2
/dev/mmcblk0p1       /media/BONEBOOT      auto       defaults              0  2
/dev/sda1            /media/PNY           auto       noauto                0  2
/dev/mmcblk1p1       /media/BEAGLEBONE    auto       ro                    0  2

I have created a file /var/lib/connman/wifi.config that has two sections, one for each of the WiFi networks I regularly connect to. The first is my primary network, and it seems to connect reliably. The second is a network I occasionally power up, but I’ve not spent much time testing it. The good thing is that the credentials are in one place, and connman is supposed to choose the first network in the list that it finds.

root@beaglebone:~# cat /var/lib/connman/wifi.config
Type = wifi
Name = WimsWorld-5G
Security = wpa2-psk
Passphrase = MyPasswordInPlainText

Type = wifi
Name = WimsWorld-UAV
Security = wpa2-psk
Passphrase = MyPasswordInPlainText

I created /etc/udev/rules.d/70-wifi-powersave.rules following the information in https://wiki.archlinux.org/index.php/Power_saving#Network_interfaces , paying explicit attention to the fact that naming the file matters.

In this case, the name of the configuration file is important. Due to the introduction of persistent device names via 80-net-name-slot.rules in systemd v197, it is important that the network powersave rules are named lexicographically before 80-net-name-slot.rules, so that they are applied before the devices are named e.g. enp2s0.

root@beaglebone:~# cat /etc/udev/rules.d/70-wifi-powersave.rules
ACTION=="add", SUBSYSTEM=="net", KERNEL=="wlan*", RUN+="/usr/sbin/iw dev %k set power_save off"

The iw dev wlan0 set power_save off command disables a WiFi feature called power save mode. I believe it is part of the 802.11 standard, but support varies by driver and chipset. It gets negotiated between the client device and the access point during authentication. If it is enabled, the access point may buffer multiple small packets before sending them to the client, so the client spends less time transmitting or receiving. If I run a continuous ping (ping -t) against the BBB from my Windows machine with power_save off, the round-trip time is very stable at 1 to 2ms. If I get a connection with power_save on, the time varies greatly, with most times reported over 100ms.

My home network has plenty of nearby networks to conflict with.

root@beaglebone:~# iw wlan0 scan | grep SSID | sort
        SSID: Aman-Guest
        SSID: Aman2.4G
        SSID: Aman5G
        SSID: Angela's Wi-Fi Network
        SSID: Battlestar Galactica
        SSID: Battlestar Galactica
        SSID: CenturyLink0705
        SSID: Cyberia
        SSID: Dagobah
        SSID: Derek's Wi-Fi Network
        SSID: HP-Print-60-LaserJet 100
        SSID: HSE-1305(a) .media
        SSID: Jaggernet
        SSID: Jaggernett
        SSID: Joergstrasse
        SSID: Joergstrasse5
        SSID: Joshernet
        SSID: MOTOROLA-06F23
        SSID: NCH1205
        SSID: NCH515
        SSID: NCH611
        SSID: NETGEAR84
        SSID: Paris
        SSID: PhishingNet
        SSID: Poop2 5GHz
        SSID: PoopTime
        SSID: SMC
        SSID: Se1301
        SSID: Seattle2GHz
        SSID: SusansWIFI
        SSID: WimsWorld
        SSID: WimsWorld-5G
        SSID: XVI
        SSID: bedford
        SSID: bedford
        SSID: go-seahawks
        SSID: goodtimes
        SSID: goodtimes-guest
        SSID: ladines
        SSID: maverick
        SSID: mridula_air
        SSID: shubaloo
        SSID: shubaloo-5g
        SSID: washington

One other change that I made was to disable the cpu-ondemand.timer service with the command:

systemctl disable cpu-ondemand.timer

I don’t know if that has affected my WiFi stability, but it has certainly made my overall system more stable. By default this service runs after the BBB has been up for ten minutes and switches the CPU clock to the ondemand frequency governor with the command cpufreq-set -g ondemand. I ran into problems with my machine changing its internal frequency on a regular basis. For my purposes, I chose to leave the CPU in its default state, running with the performance governor, which leaves it at 1000 MHz. Run the command cpufreq-info to see what state the BBB is currently in and what it’s possible to change it to.

My machine seems to be stable right now, as can be shown by nothing being added to the dmesg log since the initial boot, 19 and a half hours ago.

root@beaglebone:~# dmesg | tail -32 ; uptime
[    9.360135] usb0: eth_open
[    9.360359] IPv6: ADDRCONF(NETDEV_UP): usb0: link is not ready
[   10.281944] gs_open: ttyGS0 (dcaccc00,dcaa8600)
[   10.282105] gs_close: ttyGS0 (dcaccc00,dcaa8600) ...
[   10.282119] gs_close: ttyGS0 (dcaccc00,dcaa8600) done!
[   10.283944] gs_open: ttyGS0 (dcaccc00,dcd1f980)
[   11.637465] usb0: stop stats: rx/tx 0/0, errs 0/0
[   11.742846] ip_tables: (C) 2000-2006 Netfilter Core Team
[   12.058808] net eth0: initializing cpsw version 1.12 (0)
[   12.070772] net eth0: phy found : id is : 0x7c0f1
[   12.070810] libphy: PHY 4a101000.mdio:01 not found
[   12.075883] net eth0: phy 4a101000.mdio:01 not found on slave 1
[   12.133068] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[   12.694713] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready
[   18.301568] wlan0: authenticate with 20:4e:7f:85:ce:5b
[   18.327171] wlan0: send auth to 20:4e:7f:85:ce:5b (try 1/3)
[   18.327734] wlan0: authenticated
[   18.336184] wlan0: associate with 20:4e:7f:85:ce:5b (try 1/3)
[   18.337359] wlan0: RX AssocResp from 20:4e:7f:85:ce:5b (capab=0x411 status=0 aid=2)
[   18.342420] wlan0: associated
[   18.342545] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
[   18.342777] cfg80211: Calling CRDA for country: US
[   18.342940] cfg80211: Regulatory domain changed to country: US
[   18.342951] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
[   18.342962] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
[   18.342973] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
[   18.342983] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[   18.342993] cfg80211:   (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[   18.343003] cfg80211:   (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[   18.343013] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
[   18.343022] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm)
[   18.418237] wlan0: Limiting TX power to 23 (23 - 0) dBm as advertised by 20:4e:7f:85:ce:5b
 16:34:09 up 19:35,  1 user,  load average: 0.03, 0.07, 0.05

Webcam on BeagleBone Black using OpenCV

I’ve been working with my BBB and Logitech C920 webcam trying to stream video at low latency for some time and have not yet managed to get the latency under 2 seconds.

As a side project I wanted to use the BBB to create a time lapse video, capturing a picture a second, and then later stitching all of the pictures into a video using ffmpeg.

I’m using OpenCV for the first time. I’m really only using it for the capture/save and to draw some text and lines onto the image, which probably makes OpenCV significant overkill.

My C++ code for the process is:

#include <iostream> // for standard I/O
#include <string>   // for strings
#include <iomanip>  // for controlling print formatting
#include <sstream>  // string to number conversion
#include <ctime>    // for time_t and gmtime
#include <unistd.h> // for sleep
#include <opencv2/opencv.hpp> // for VideoCapture, drawing primitives, and imwrite
using namespace std;
using namespace cv;

std::string timeToISO8601(const time_t & TheTime)
{
	std::ostringstream ISOTime;
	struct tm * UTC = gmtime(&TheTime);
	ISOTime.fill('0');
	ISOTime << UTC->tm_year+1900 << "-";
	ISOTime << setw(2) << UTC->tm_mon+1 << "-";
	ISOTime << setw(2) << UTC->tm_mday << "T";
	ISOTime << setw(2) << UTC->tm_hour << ":";
	ISOTime << setw(2) << UTC->tm_min << ":";
	ISOTime << setw(2) << UTC->tm_sec;
	ISOTime << "Z";
	return(ISOTime.str());
}

std::string getTimeISO8601(void)
{
	time_t timer;
	time(&timer);
	return(timeToISO8601(timer));
}

int main()
{
	VideoCapture capture(-1);	// Using -1 tells OpenCV to grab whatever camera is available.
	if (!capture.isOpened())
	{
		std::cout << "Failed to connect to the camera." << std::endl;
		return(-1);
	}
	//capture.set(CAP_PROP_FRAME_WIDTH,2304);	// This should be possible for still images, but not for 30fps video.

	for (int OutputFolderNum = 100; OutputFolderNum < 1000; OutputFolderNum++)
		for (int OutputImageNum = 1; OutputImageNum < 10000; OutputImageNum++)
		{
			Mat C920Image;
			capture >> C920Image;	// grab the next frame from the camera
			if (!C920Image.empty())
			{
				std::ostringstream OutputFilename;
				OutputFilename << "/media/BONEBOOT/DCIM/";
				OutputFilename << OutputFolderNum;
				OutputFilename << "WIMBO/img_";
				OutputFilename << OutputImageNum;
				OutputFilename << ".jpg";

				line(C920Image, Point(0, C920Image.rows/2), Point(C920Image.cols, C920Image.rows/2), Scalar(255, 255, 255, 32)); // Horizontal line at center
				line(C920Image, Point(C920Image.cols/2, 0), Point(C920Image.cols/2, C920Image.rows), Scalar(255, 255, 255, 32)); // Vertical line at center

				circle(C920Image, Point(C920Image.cols/2, C920Image.rows/2), 240, Scalar(255, 255, 255, 32)); // Circles based at center
				putText(C920Image, "10", Point((C920Image.cols/2 + 240), (C920Image.rows/2)), FONT_HERSHEY_SIMPLEX, 1.0, Scalar(0, 0, 255));
				circle(C920Image, Point(C920Image.cols/2, C920Image.rows/2), 495, Scalar(255, 255, 255, 32)); // Circles based at center
				putText(C920Image, "20", Point((C920Image.cols/2 + 495), (C920Image.rows/2)), FONT_HERSHEY_SIMPLEX, 1.0, Scalar(0, 0, 255));
				circle(C920Image, Point(C920Image.cols/2, C920Image.rows/2), 785, Scalar(255, 255, 255, 32)); // Circles based at center
				putText(C920Image, "30", Point((C920Image.cols/2 + 785), (C920Image.rows/2)), FONT_HERSHEY_SIMPLEX, 1.0, Scalar(0, 0, 255));
				circle(C920Image, Point(C920Image.cols/2, C920Image.rows/2), 1141, Scalar(255, 255, 255, 32)); // Circles based at center
				putText(C920Image, "40", Point((C920Image.cols/2 + 1141), (C920Image.rows/2)), FONT_HERSHEY_SIMPLEX, 1.0, Scalar(0, 0, 255));

				string DateTimeText = "WimsWorld.com " + getTimeISO8601();
				int baseline=0;
				Size textSize = getTextSize(DateTimeText, FONT_HERSHEY_SIMPLEX, 1, 1, &baseline);
				putText(C920Image, DateTimeText, Point((C920Image.cols - textSize.width), (C920Image.rows - baseline)), FONT_HERSHEY_SIMPLEX, 1.0, Scalar(0, 0, 255));
				imwrite(OutputFilename.str(), C920Image);
				std::cout << DateTimeText << " Wrote File : " << OutputFilename.str() << std::endl;
			}
			std::cout << getTimeISO8601() << "\r" << std::flush;
			sleep(1);	// pause so we capture roughly one picture per second
		}
	return 0;
}

I compile it on the BBB with the command:

g++ -O2 `pkg-config --cflags --libs opencv` TimeLapse.cpp -o TimeLapse

I’ve got a bug in that I don’t automatically create the directory structure that I’m saving files into. That’s in the to-do list.

I had been interested in the angle of view on the C920 and found it defined on the Logitech support site that the “Diagonal Field of View (FOV) for the Logitech C920 is 78°”. Unfortunately I was not able to understand if that varied based on the resolution being used. I’m currently using the resolution of 1920×1080, but for stills the camera can capture up to 2304×1536.

I did the geometry math to figure out that 10° off center would be a radius of 240 pixels, 20° off center a radius of 495, and 30° off center a radius of 785. Remembering SOHCAHTOA as Some Old Hags Can’t Always Hide Their Old Age from 9th grade math class came in useful. Using 1920×1080 and a 78° diagonal angle, my half-diagonal (the opposite side) works out to 1101 pixels with a half-angle of 39°, allowing me to calculate an eye height of 1360 = 1101/Tan(39°). Once I had my eye height I could calculate the radius of a circle at any angle by Radius = Tan(Angle) * EyeHeight.

I wanted the circles and angles of vision for my streaming video application and decided that seeing them drawn on the images created here would be helpful, along with both the horizontal and vertical center lines.

The thing I’m not happy with is that the application seems to be using between 30% and 60% of the CPU on the BBB. When I stream video from the C920 using the native H.264 output the camera can produce, I was only using about 3% of the BBB CPU. I’ve commented out my drawing code and verified that the CPU load is primarily related to acquiring the image from the capture device and saving it out as a JPEG file. The line and text drawing adds minimal incremental load. I want to keep the CPU load as low as possible because I’m powering this device from a battery and want as long a runtime as possible.

I believe that the OpenCV library is opening the capture device in a movie streaming mode, and it’s using more CPU interpreting the stream as it’s coming in than the method I was using for streaming to a file. I’ve not yet figured out if there’s a way to define what mode OpenCV acquires the image from the camera.

I was trying to draw the lines and circles with some alpha transparency, but it seems that my underlying image is not the right number of channels and so the lines are being drawn fully opaque.

When the capture opens, it outputs several instances of the same error, “VIDIOC_QUERYMENU: Invalid argument”, and I’ve not figured out what they mean or how to stop producing them.

I am working on a 32GB flash card, partitioned into two 16GB filesystems. The first is FAT32, has a simple uEnv.txt file in the root allowing the BBB onboard flash to be used, and follows the Design rules for Camera File systems standard for image naming. This allows me to take the card out, put it in a PC, and have it recognized just like a normal camera memory card.

Contents of uEnv.txt:


The camera seems to be focusing on the building across the street instead of West Seattle.

View from 1200 Western Ave, 13th Floor Elevator Room
