Roku Remapping Useless Channels

23 Oct 2016 03:44

Does your Roku remote have a useless Rdio button? Rdio has shut down, but you cannot remap your remote buttons. Or can you?

IMG_20161022_204713.jpg

You cannot really remap the keys, but what you can do is check what channel is currently displayed on the TV and launch another one instead. Unfortunately, there's no pub/sub mechanism, so you need to poll.
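
For reference, the active-app query returns a small XML document naming the current channel. It looks roughly like this (the version attribute here is just illustrative):

<active-app>
  <app id="837" type="appl" version="2.9.91">YouTube</app>
</active-app>

The grep in the script extracts the ">YouTube</app>" part, which the case statement then matches by channel name.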

Here's the whole code:

#!/bin/bash

# Channel IDs on my TV; find yours with the /query/apps call shown below
PLAYSTATION=tvinput.hdmi3
ROKU_MEDIA_PLAYER=2213
YOUTUBE=837

# Poll once a second: check which channel is active and react
while sleep 1; do
  curApp="`curl -s http://roku:8060/query/active-app | grep -o '>.*</app>'`"
  case "$curApp" in
    *VUDU*)
      curl -X POST http://roku:8060/launch/$ROKU_MEDIA_PLAYER
      ;;
    *Netflix*)
      curl -X POST http://roku:8060/launch/$YOUTUBE
      ;;
    *Rdio*)
      curl -X POST http://roku:8060/launch/$PLAYSTATION
      ;;
  esac
done

As you can see, starting the VUDU channel opens the Roku Media Player, starting Netflix opens YouTube, and the RIP Rdio button is remapped to HDMI3, which is the HDMI port I use for my PlayStation.

To find out what IDs different channels have, just call:

curl http://roku:8060/query/apps
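
The response lists every installed channel (and, on a Roku TV, the TV inputs) together with its ID. It looks roughly like this; the IDs 837 and 2213 are the ones used above, everything else is illustrative:

<apps>
  <app id="837" type="appl" version="2.9.91">YouTube</app>
  <app id="2213" type="appl" version="4.1.1508">Roku Media Player</app>
  <app id="tvinput.hdmi3" type="tvin" version="1.0.0">HDMI 3</app>
</apps>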

(Note I'm using http://roku:8060/ in this script; that's because I run a DNS server on my local network and my Roku TV has a fixed IP that the name "roku" resolves to.)


Monitoring Freezer Temperature

25 Sep 2016 22:11

For some time we've had suspicions that the freezer in our apartment wasn't working correctly. Recently my wife reached into it to get some ice cubes but found water in the tray instead. That clearly meant the temperature had risen above the freezing point and stayed there for some time. She put a thermometer on the fridge, put the sensor into the freezer, and started watching the readings. The temperature reached around -20°C and stayed there, fluctuating between -15°C and -20°C.

The webcam reader

I decided it was time to record the temperature, so I brought over the laptop, adjusted the screen angle, set the brightness to max, and took a shot from the webcam to see if the thermometer was in the camera's view:

mplayer tv:// -vo png -frames 3

This takes 3 frames from the webcam and saves them as PNG files: 00000001.png, 00000002.png, and 00000003.png. If you're wondering why I need 3 frames: the first one is always under- or overexposed and out of focus, the second is usually OK, and the third is almost always good (as webcams go). Long story short, I'm giving the camera some time to auto-adjust its settings.

Here's a script that does just a bit more than that:

#!/bin/bash

set -e

export DISPLAY=:0

# e.g. 24092016-101055 (locale-dependent date and time, with dots and colons stripped)
curdate=`date +%x-%X | sed 's/[\.:]//g'`
outpng=/home/quake/git/thermonitor/out/"$curdate".png
tmppng=/home/quake/git/thermonitor/tmp/00000005.png

mkdir -p /home/quake/git/thermonitor/out/ /home/quake/git/thermonitor/tmp
cd /home/quake/git/thermonitor/tmp

# White background
ristretto ../white.png &

# Wake screen
xte "mousemove 100 100"
sleep 0.2
xte "mousemove 99 99"

# Dump screen
mplayer tv:// -vo png -frames 5 -noconsolecontrols
cp "$tmppng" "$outpng"
echo "$outpng"

kill $!

It's very hacky, but here are the big parts:

  • set -e makes the script stop if any of the commands in it fail
  • export DISPLAY=:0 selects the default X display, so I can run the command from an SSH shell
  • Then we have some hard-coded paths to tmp and out directories
  • ristretto ../white.png & starts an image viewer showing white.png, which is a big all-white image. This makes the screen display enough white so the LCD thermometer is properly lit
  • I use xte to move the mouse around: this blocks the automatic screen dimming, so the screen stays at 100% brightness
  • Then we have the mplayer command from before, changed to 5 frames (just to be sure) and with -noconsolecontrols added. mplayer kind of hangs when you start it without a proper terminal on stdin unless you pass this option
  • Then I copy the 5th frame to the output file, which has the current date and time in its path
  • Finally I kill the ristretto process I started before

I run this command in a while loop like this:

while sleep 28 ; do  ./dup.sh ; done

./dup.sh takes around 2 seconds, so I have about 2 photos a minute.

I run that inside a screen session.

Now, in the out directory, I serve the files using Python's built-in HTTP server:

python -m SimpleHTTPServer

This starts an HTTP server on port 8000 that serves all the files and generates an index of them when you request /.
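
(If you're on Python 3, the module was renamed, so the equivalent is:

python3 -m http.server

but the same idea applies.)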

The processing

What's easier for copying the HTTP-exposed files than a plain old wget command?

wget -r http://dell:8000/

This creates a directory named dell:8000 and downloads all the PNG files into it.

Once you have that directory you may later want to update it with only the newer files:

rm dell\:8000/index.html
wget -nc -r http://dell:8000/

The -nc switch makes wget skip files that are already there. We explicitly remove index.html so wget downloads a fresh listing (one that includes the newer files).

I wanted to make a program that takes a photo of the 7-segment display and reads it, producing a number that, along with the date, could be used to graph the temperature over time.

First, let's use ImageMagick to:

  • crop the picture to only the interesting part: the OUT reading
  • shear the picture to make the LCD segments vertical and horizontal, not skewed
  • bump contrast/gamma to remove the noise
  • auto-adjust the visual properties of the picture so it's more consistent across different times (like night vs day)
  • make it black and white — it turned out to work best if I only use the green channel; R and B turned out noisier than G
  • resize it to as small a size as needed — for easier and faster processing

Here's the actual command:

convert $file_in -gamma 1.5 -auto-level -brightness-contrast 35x80 -shear -14x0 -crop 260x70+320+227 -channel Green -separate -resize 40x20 $file_out

Input image:

input.png

Output image:

output.png

Zoomed in:

output-scaled.png

You can see the bottom segment (D in the standard 7-segment naming) is not really visible, but that's OK: all the digits can be read correctly without it, so we'll only consider the 6 segments that we can easily read.

Now comes the meat, the Python program reading the file and returning the temperature:

#!/usr/bin/env python
#encoding: utf-8
 
from PIL import Image
 
# Points that hold each segment:
# d1 is the first digit
d1_sA = [(12,1), (13,1), (14,1), (15,1)]
d1_sB = [(18,2), (18,3), (18,4), (18,5)]
d1_sC = [(18,8), (18,9)]
d1_sE = [(11,8), (12,9), (11,9), (12,8)]
# Don't need this, as the first digit is only ever 1, 2, or blank, and none of those uses segment F
d1_sF = []
d1_sG = [(13,7), (14,7), (15,7), (16,7)]
 
# d2 is the second digit
d2_sA = [(24,1), (25,1), (26,1), (27,1)]
d2_sB = [(29,2), (29,3), (29,4), (29,5)]
d2_sC = [(29,8), (29,9)]
d2_sE = [(22,8), (22,9)]
d2_sF = [(22,2), (22,3), (22,4), (22,5)]
d2_sG = [(24,6), (25,6), (26,6), (24,7), (25,7), (26,7)]
 
# d3 is the small digit (first after the decimal point)
d3_sA = [(35,4), (36,4), (37,4)]
d3_sB = [(38,5), (38,6), (38,7)]
d3_sC = [(38,9)]
d3_sE = [(34,9)]
d3_sF = [(34,5), (34,6), (34,7)]
d3_sG = [(35,8), (36,8)]
 
# Now the tricky part, for each segment I define a threshold below which I consider it "lit"
# 0 means completely black, 255 is white.
# Because of uneven lighting, for each segment (and digit, but we ignore that) the value is different.
tA = 200
tB = 170
tC = 120
tE = 115
tF = 140
tG = 170
 
# A threshold for the "-" sign
tSIGN = 200
# All of those were obviously updated on the go to match the files
 
# A nice debugging function that prints which segments the code considers lit
# Also if you wondered what A, B, C, E, F, G meant, here's the schematic:
 
def print_digit(segs):
    # Only print this if there's a second argument to the script passed
    if len(sys.argv) == 2:
        return
    print '''
   {A}{A}
   {A}{A}
{F}      {B}
{F}      {B}
{F}      {B}
   {G}{G}
   {G}{G}
{E}      {C}
{E}      {C}
{E}      {C}
'''.format(
    A='###' if 'A' in segs else '   ',
    B='###' if 'B' in segs else '   ',
    C='###' if 'C' in segs else '   ',
    E='###' if 'E' in segs else '   ',
    F='###' if 'F' in segs else '   ',
    G='###' if 'G' in segs else '   ',
    )
 
# This doesn't do anything spectacular: it just reads each segment, passing its coordinates and a threshold adjusted by the average value of the first 9 pixels of the image, (0,0) - (2,2)
 
def read_digit(im, pointsA, pointsB, pointsC, pointsE, pointsF, pointsG, avg9px):
     segs = ''
     segs += 'A' if read_segment(im, pointsA, tA-avg9px) else ''
     segs += 'B' if read_segment(im, pointsB, tB-avg9px) else ''
     segs += 'C' if read_segment(im, pointsC, tC-avg9px) else ''
     segs += 'E' if read_segment(im, pointsE, tE-avg9px) else ''
     segs += 'F' if read_segment(im, pointsF, tF-avg9px) else ''
     segs += 'G' if read_segment(im, pointsG, tG-avg9px) else ''
 
     print_digit(segs)
 
# A list of all digits and their representation on the 7-segment display:
 
     if segs == 'ABCEF':
         return 0
     if segs == 'BC':
         return 1
     if segs == 'ABEG':
         return 2
     if segs == 'ABCG':
         return 3
     if segs == 'BCFG':
         return 4
     if segs == 'ACFG':
         return 5
     if segs == 'ACEFG':
         return 6
     if segs == 'ABC':
         return 7
     if segs == 'ABCEFG':
         return 8
     if segs == 'ABCFG':
         return 9
 
# A special case for the first digit: it never displays "0", it just lights no segments at all
     if segs == '':
         return 0
 
# This function takes the PIL image object and a list of (x,y) coordinates,
# and checks whether their average value is smaller than the threshold passed: segment "on"
# For 0 points passed it returns False: segment "off"
def read_segment(im, points, threshold=128):
    val = 0
    for point in points:
        val += im.getpixel(point)
    return val < threshold * len(points)
 
# Nothing interesting in here, just for printing the date from the file name
def get_date(file_name):
    time_str = file_name.split('-')[-1].replace('.png', '')
    d1, d2, m1, m2, y1, y2, y3, y4 = file_name.split('/')[-1].split('-')[0]
    return '{}{}/{}{}/{}{}{}{} '.format(m1, m2, d1, d2, y1, y2, y3, y4) + '{}{}:{}{}:{}{}'.format(*list(time_str))
 
# A bunch of imports in the middle of the file
# Don't do that at home ;-)
import sys
import subprocess
 
# This will be for example: pngs/dell\:8000/24092016-101055.png
file_in = sys.argv[1]
 
# And this: processed-pngs/dell\:8000/24092016-101055.png
file_out = 'processed-' + file_in
 
# Calling the ImageMagick as discussed in the article
subprocess.check_call(['convert', file_in, '-gamma', '1.5', '-auto-level', '-brightness-contrast', '35x80', '-shear', '-14x0', '-crop', '260x70+320+227', '-channel', 'Green', '-separate', '-resize', '40x20', file_out])
 
# Reading what it created
im=Image.open(file_out)
 
# Now a thing I added at some point later.
# Because of different lighting throughout the day, and because the ImageMagick command
# above was not good at compensating for it (in spite of auto-level and high contrast),
# some of the images were darker than others. In most images (the ideal scenario for the code)
# the first 9 pixels of the image, (0,0) to (2,2), were just white (or very close), but in the darker
# images the whole image was darker, and I used those first 9 pixels to detect how much darker
first9px = im.getpixel((0,0)) + im.getpixel((0,1)) + im.getpixel((0,2)) \
         + im.getpixel((1,0)) + im.getpixel((1,1)) + im.getpixel((1,2)) \
         + im.getpixel((2,0)) + im.getpixel((2,1)) + im.getpixel((2,2))
 
# This is the compensation: for most of the images it's 0 or very small, but for the darker images it's larger
avg9px = 255-first9px/9
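# Worked example: an all-white corner gives first9px = 9*255 = 2295, so avg9px = 0 (no compensation);
# a darker image whose corner pixels average 235 gives avg9px = 20, lowering every threshold by 20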
 
num = '{}{}{}.{}'.format(
    '-' if read_segment(im, [(2,6), (3,6), (4,6)], tSIGN-avg9px) else '+',
    read_digit(im, d1_sA, d1_sB, d1_sC, d1_sE, d1_sF, d1_sG, avg9px),
    read_digit(im, d2_sA, d2_sB, d2_sC, d2_sE, d2_sF, d2_sG, avg9px),
    read_digit(im, d3_sA, d3_sB, d3_sC, d3_sE, d3_sF, d3_sG, avg9px),
)
 
if 'None' not in num:
    print get_date(file_in) + '\t' + num
 
# If there's a second argument to the script passed, show the original image for comparison
if len(sys.argv) > 2:
    Image.open(file_in).show()

Even though this script is simple, after tweaking the thresholds most of the files were recognized correctly. Those that weren't had one or more unrecognized digits, so they were easy to filter out. Out of more than a day's worth of images, only 2 or 3 minutes were missing a reading.

I loaded the data into LibreOffice and generated this pretty graph:

graph.jpg

Timelapse video

Another approach to visualizing the data was to create a video.

The plan:

  • Annotate the images with the recorded time
  • Compose the video from the single frames, putting 60 frames into each second of the resulting video

60 frames a second with roughly 2 frames captured a minute means a day of recording is compressed to:

2 frames a minute * 60 minutes an hour * 24 hours a day = 2880 frames
2880 frames / 60 frames a second = 48 seconds

This makes it "viewable". 60 FPS (versus 30 FPS at a higher playback speed) means you can pause at any time and read the crisp time and temperature.

Here's the annotate part:

#!/usr/bin/env python
 
import sys
import subprocess
 
path_in = sys.argv[1]
path_out = sys.argv[2]
date, time = path_in.replace('.png', '').split('/')[-1].split('-')
 
D1, D2, M1, M2, Y1, Y2, Y3, Y4 = date
h1, h2, m1, m2, s1, s2 = time
 
label = '{}{}/{}{}/{}{}{}{} {}{}:{}{}:{}{}\\n'.format(M1, M2, D1, D2, Y1, Y2, Y3, Y4, h1, h2, m1, m2, s1, s2)
 
subprocess.check_call([
    'convert', path_in,
    '-gravity', 'south',
    '-pointsize', '45',
    '-font', 'FreeMono',
    '-annotate', '0', label,
    '-fill', 'black',
    path_out
    ])

Here's the result:

annotated.png

Running this in a loop, 16 images at a time:

#!/bin/bash

i=0
for file_in in dell*/*.png; do
  file_out="`printf "labeled/%06d.png" $i`"
  echo ./add-labels.py "$file_in" "$file_out"
  i=$((i+1))
done | parallel -j 16

In addition to annotating the images, the loop names them 000000.png, 000001.png, etc. (it only echoes the commands; GNU parallel reads them from stdin and runs 16 at a time), which makes it easy for avconv to convert them into a video:

avconv -fflags +genpts -r 60 -i labeled/%06d.png -r 60 temperature.mkv
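
avconv is the Libav fork of ffmpeg, and for this invocation the two should be interchangeable, so with ffmpeg installed instead it would be:

ffmpeg -fflags +genpts -r 60 -i labeled/%06d.png -r 60 temperature.mkv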

And here's the video:


Pushing an HTTP Stream To Roku

18 Sep 2016 23:11

After a bit of traffic capturing, I managed to find out how to make Roku play an HTTP stream by URL.

It seems a Roku device (Roku TV in my case) hosts an HTTP server which is exposed on port 8060.

There's a series of MDNS requests to determine the available players on the network, but we can skip that, since I assume you know the IP address of your Roku. After the device (a phone in my case) has identified the target player, it sends an HTTP request:

GET http://your-roku-address:8060/query/device-info

You get a nice XML document in return that lists some properties like this:

<device-info>
   <udn>...</udn>
   <serial-number>...</serial-number>
   <device-id>...</device-id>
   <advertising-id>...</advertising-id>
   <vendor-name>Hisense</vendor-name>
   <model-name>Hisense 40H4</model-name>
   <model-number>5203X</model-number>
   <model-region>US</model-region>
   <screen-size>40</screen-size>
   <wifi-mac>...</wifi-mac>
   <network-type>wifi</network-type>
   <user-device-name/>
   <software-version>7.2.0</software-version>
   <software-build>4143</software-build>
   <secure-device>true</secure-device>
   <language>en</language>
   <country>US</country>
   <locale>en_US</locale>
   <time-zone>US/Pacific</time-zone>
   <time-zone-offset>-420</time-zone-offset>
   <power-mode>PowerOn</power-mode>
   <supports-suspend>true</supports-suspend>
   <developer-enabled>true</developer-enabled>
   <keyed-developer-id/>
   <search-enabled>true</search-enabled>
   <voice-search-enabled>true</voice-search-enabled>
   <notifications-enabled>true</notifications-enabled>
   <notifications-first-use>false</notifications-first-use>
   <headphones-connected>true</headphones-connected>
   <expert-pq-enabled>0.5</expert-pq-enabled>
</device-info>

You can see, for example, that my device is a Hisense 40H4 and it's a secure device.
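
If you want to pull a couple of these fields from a script, here's a minimal Python sketch (standard library only; the tag names are the ones from the XML above):

#!/usr/bin/env python
import urllib
from xml.etree import ElementTree

# Fetch and parse the device-info XML
xml = urllib.urlopen('http://your-roku-address:8060/query/device-info').read()
info = ElementTree.fromstring(xml)
print info.findtext('model-name'), info.findtext('software-version')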

This is interesting, but it doesn't cause anything to play.

Here's the part that gets Roku to play something:

POST http://your-roku-address:8060/input/15985?t=v&u=http%3A%2F%2F192.168.1.108%3A8000%2Fstream.m3u8&k=(null)&videoName=192.168.1.108%3A8000%2Fstream.m3u8&videoFormat=hls

That's it: you pass the m3u8 URI as the u param (you need to URL-encode it first) and Roku starts playing it. Note the HTTP method used is POST, but the payload is empty.

I'm not sure what the 15985 in the URL means; maybe it's the ID of the PlayOnRoku video source.
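
For scripting this, here's a minimal Python sketch of the same request. The parameter values mirror the captured request above; the addresses and the videoName label are examples to substitute with your own:

#!/usr/bin/env python
import urllib

roku = 'http://your-roku-address:8060'
stream = 'http://192.168.1.108:8000/stream.m3u8'

params = urllib.urlencode({
    't': 'v',                    # content type: video
    'u': stream,                 # the stream URL
    'k': '(null)',
    'videoName': 'stream.m3u8',  # the label Roku displays
    'videoFormat': 'hls',
})

# Passing a (here empty) data argument makes urlopen issue a POST
urllib.urlopen(roku + '/input/15985?' + params, '')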


My Computer Setup: Hardware

22 Jun 2016 01:40

I believe you should use the tools that suit you. As a software developer, my computer is my toolbox. And I want to share what's in it.

While writing this post I realized I want not only to list my setup but also to explain the reasons behind each element of it.

That makes it too much for one post, so it's going to be a few separate blog posts, one on each of the parts.

Today we start with…

Hardware

My current computer is one supplied by my company — Wikia. It's a Dell Latitude E6430s. Previously I used an older Dell Latitude; both look very similar and are built around the same platform.

dell-latitude-e6420_g.jpg

(My previous laptop)

dell_e6430sHH.jpg

(My current one)

Intel-based

The computer is based on Intel hardware: Intel CPU, Intel Ethernet and Wi-Fi cards, Intel graphics. The hardware is very nicely supported by Linux, which will be covered in the next post.

Docking station

I'm a big fan of docking stations: you just put your laptop on one and you don't need to play with any cables whatsoever. The laptop gets power from it, and you can connect several devices through it, all just by placing the laptop in the correct spot.

Screens

I used to have two external screens, but now I only have one — a bigger one, though. The main difference between my old and new laptops, from my standpoint, was the number of simultaneously supported screens. Previously the graphics adapter could only drive 2; the new model supports 3.

With the old model, it was very tricky to script the computer to switch from displaying the contents just on the laptop to displaying it on two external screens. You needed to do that in stages: first activate one external screen, then disable the laptop screen, and finally activate the other screen. I did finally craft a script that worked, but it took some time.

With the new model, the regular tools for screen management (based on XRandR) work flawlessly, whether you want to use the internal screen, internal + 1 external, 2 external, or internal + 2 external, and whatever transitions between them you want, it just works.

Keyboard

When I joined Wikia, my first project was rewriting the ad engine. That required a lot of coding, and my hands started to hurt. I asked for an ergonomic keyboard and got one: a Microsoft Natural 4000. I'm not an expert on keyboards, but I must say my pain was gone and I never wanted to type on a non-ergonomic keyboard again.

ms-natural-4000-keyboard.jpg

When I moved to the States, my keyboard stayed in Poland, so I asked for another ergonomic keyboard and got a Microsoft keyboard again. This time it was a different model: the Microsoft L5V-00001. The keyboard took some time to get used to, but as before, my hands don't hurt.

Microsoft-L5V-00001.jpg

I want to say one thing here: I don't think an ergonomic keyboard (or mouse, for that matter) makes you type faster — your body can adjust to using almost any sensible tool at comparable speeds — but it does help you type without pain. As I spend a good part of the day typing, this is important.

Mouse

As for the mouse, almost anything goes: I've used wired, Bluetooth, and proprietary-wireless-USB mice, as well as laptop touchpads. Dells are equipped with TrackPoint devices; I don't like using those, but I do like that you get a dedicated middle button with them, which I use way more often than most people.

One thing that I absolutely need from my mouse-like device is sane scrolling and the middle button I mentioned before. My Bluetooth mouse has a broken scroll wheel — when scrolling down, it sometimes scrolls up — and this is very annoying for me.

Summing up: I don't use the mouse that often, mostly to scroll content, so give me anything that allows me to scroll, and I'm happy.

Extras

The laptop has a removable high-capacity battery, an SD card reader, USB 3.0, and eSATA. It features an extension bay which by default is occupied by a DVD writer. I have a 500GB drive placed in there with the original Windows installation moved onto it, so I can both boot into it and start it in VirtualBox.

The very annoying part of the laptop is the HDMI output, which is provided through a mini-HDMI port. The port is not very solid, and after just a few uses mine stopped working mechanically (it was later replaced by Dell support). You also need an adapter, which you normally don't carry with you.

Seating arrangement

Again, because I spend a good part of the day working on a computer, it is important for me to sit in a position my body can handle. There are resources online to help you arrange your seat, desk, keyboard, and screen optimally, so I'm not going to go through that, but here's what I needed to do:

  • As the desks in the office don't offer height adjustment, I ordered a monitor stand and put my keyboard on top of it. This makes up for the fact that my desk is too low for me. I keep my mouse on it too. There's no room for my num pad, but that's OK.
  • With my keyboard raised, you can expect my screen needs to sit higher as well. Even the screen's quite wide height-adjustment range isn't enough, so I placed it on a bunch of Amazon boxes.
  • What works one day doesn't work the next. Every few days I adjust the screen and the seat a tad higher or lower, just to change the position a little and avoid any repetitive strain injury (well, not literally, but I guess you know what I mean).

monitor-stand.jpg

Home setup

Whenever I'm not at work and I use my laptop, I use it completely bare: no external screen, no ergonomic keyboard, no optimized seating position, usually no AC cord. The battery, when fully charged, lasts anywhere between 3 and 6 hours of work/play, and that's more than enough for how I use my computer outside the office.

Next time

The next post will be dedicated to the operating system.

