Programming Games

May 18, 2020 · 2 min read

It's been a fantasy of mine for some time to envision and program a "real" game. I've made a couple of games in the past, including a Mario clone, Zario, for the TI-85 graphing calculator, and a whack-a-mole Flash game for a kids' TV show, Taylor's Attic.

Mouse Attack

Lately, my 7-year-old daughter has been making games in Hopscotch, and we even built the beginnings of one that she designed, Word Names, in JavaScript. Note: If you load it on a phone or iPad, the letters are draggable. She lost interest pretty quickly in JavaScript, however, and has also hit a lot of the limitations of Hopscotch.

With the Playdate device coming later this year, I've been inspired to dive a little deeper into learning how to create 2D games in hopes that I can have one ready when the device is released. Playdate supports both Lua and C, and their Lua API seems to be coming along nicely. In fact, it looks like it will share some similarities with LÖVE, a popular game framework for Lua. So, in the last few weeks, I've set about learning a few basic principles of game design using Lua and LÖVE as a base. It's been fun to learn something new, and if you're interested in doing the same, here are the resources I've been using:

I'm hoping there will be more to show in the coming months!

Load Android Fragments Asynchronously

February 9, 2020 · 4 min read

If you are like me, you may occasionally receive design requests to build a more complex screen in an Android app. So, you set about building the layout, making it look just like the design mocks, and everything works great, except for one small problem: animating to that screen either stutters or hangs until the screen is loaded. You may have gone back to the layout and refactored everything into one ConstraintLayout, as Google suggests, but it still takes a bit of time to inflate the view because it's so large. Another trick you may have tried, if it's a long scrolling view, is using a library like Epoxy to only load what is currently visible in the scrolled section. This can work in some cases, but it may cause stuttering or undesired visual artifacts while scrolling because each view is loaded as it appears on screen. It may also just be too much time and effort to refactor a complex view in this way.

Thankfully, Google gave us an option to inflate a view asynchronously. It's called, unsurprisingly, AsyncLayoutInflater, and it's available both in the v4 support library and in AndroidX. Until I discovered this, I thought that all UI work (including inflating views) had to run on the main thread. It turns out that entire layouts can actually be inflated on a background thread with very little change to existing code. There are a few caveats to be aware of before going down this route, but this method works for many different situations.

Caveats

  1. AsyncLayoutInflater does not support inflating layouts that contain fragments. However, a layout can be inflated within a fragment.
  2. The layout's parent must have a thread-safe implementation of generateLayoutParams(AttributeSet).
  3. All the views being constructed as part of inflation must not create any Handlers or otherwise call Looper.myLooper().
  4. When a layout is inflated with AsyncLayoutInflater, it does not automatically get added to the parent (i.e. attachToRoot is false).

Some of these caveats may sound like a big deal, but in practice they are not typically an issue.
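To make the last caveat concrete, here is a minimal sketch of basic usage outside of a fragment, say from an Activity. It reuses the large_layout resource from later in this post and assumes container is a ViewGroup that is already attached; both are placeholders for this example. Because the inflated view is never attached automatically, the callback has to add it to its parent itself:

// Minimal sketch (inside an Activity): inflate large_layout off the main thread
// and attach it to `container` once inflation finishes.
AsyncLayoutInflater(this).inflate(R.layout.large_layout, container) { view, resid, parent ->
    // The callback runs back on the main thread, and the view is never attached
    // for us (attachToRoot is effectively false), so add it to the parent here.
    parent?.addView(view)
}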

Converting an existing fragment

The first thing to note, if you are converting an existing fragment to inflate asynchronously, is that you can't just swap out LayoutInflater for AsyncLayoutInflater. The reason is that onCreateView, where the layout inflation occurs, expects a view to be returned. The solution to both asynchronously inflating a layout for a fragment and returning a view from onCreateView is to inflate a small view that the larger view can be attached to. To start with, I created loading_view.xml with a ConstraintLayout and a ProgressBar so there isn't just an empty screen while loading the rest of the content:

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    xmlns:app="http://schemas.android.com/apk/res-auto">

    <ProgressBar
        android:id="@+id/progressBar"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

All that is left then is to inflate this view, asynchronously inflate the large layout and attach it to the loading_view. We should also hide the ProgressBar once the large layout has been loaded.

override fun onCreateView(
    inflater: LayoutInflater,
    container: ViewGroup?,
    savedInstanceState: Bundle?
): View? {
    super.onCreateView(inflater, container, savedInstanceState)

    // inflate the small loading view synchronously so onCreateView has something to return
    val v = inflater.inflate(R.layout.loading_view, container, false)
    val asyncLayoutInflater = context?.let { AsyncLayoutInflater(it) }
    asyncLayoutInflater?.inflate(R.layout.large_layout, null) { view, resid, parent ->
        (v as? ViewGroup)?.addView(view) // add the large view to the already inflated loading view
        v.findViewById<ProgressBar>(R.id.progressBar).visibility = View.GONE // hide the progress bar
    }

    return v
}

And, there you have it. A way to load a fragment's layout asynchronously.

Imagining a Different Input Device

November 27, 2019 · 4 min read

Apple has been experimenting lately with different types of input devices. In 2016, they tried the butterfly laptop keyboards, which were a huge step backward in reliability and a teeny hop forward in "key stability". After 3 years of painful reliability issues, they went back to a well-tested scissor design in their new 16" laptop. 2016 was also the year that Apple came out with their first touchscreen Mac: the Touch Bar MacBook Pro.

I guess the question is, what problem would a new input device attempt to solve? Key stability is a very minor problem, and it could be argued it's really not a problem at all. I like the thinking that the Touch Bar could be more intuitive than mousing around menus, but in practice it's just impractical. It's hard to reach and hides previously hard-wired functionality like volume and brightness controls. So what's wrong with the standard keyboard and mouse that we have had for years? I don't think there is anything particularly bad about these input devices. They get us pretty much all the functionality we need on a computer, and they are pretty ubiquitous. There is a learning curve to touch typing, but many schools teach it at a young age now. It's really not much more difficult to learn than driving a car. And the mouse, or alternatively the trackpad? It's pretty intuitive to pick up, but may not be as efficient as the keyboard in a lot of cases. Then there are touchscreens. They are probably the most intuitive, but quite imprecise. So we have:

  1. Keyboard: Fast and precise, but high learning curve and often leads to RSI
  2. Mouse: Intuitive and mostly precise, but limited input capability and often leads to RSI
  3. Touchscreen: Very intuitive, but limited input capability and not precise

If I were to imagine a new type of input device, I would start with areas that could be improved over our existing forms of input. It wouldn't be unreasonable to say that ergonomics, speed of input, intuitiveness, and precision of input are all areas to consider for improvement. We've tried different forms of voice input to improve on ergonomics, but those are slow and imprecise. It's reasonable to assume that touch input of some sort has the most legs, so to speak. We can feel feedback with our fingers and be pretty precise with our wrists and arms if the input device allows us to be (the keyboard and mouse, for example). The big challenge with touch is ergonomics. The more precise we make the input, the more strain it puts on our bodies. And the more functionality that is added to touch, the steeper the learning curve.

Let's take touch out of the mix and think about our other senses. We have taste, hearing, smell, and sight. Is there anything we can do with these other senses that would improve ergonomics, precision, and speed, and is also intuitive? Taste and smell don't seem to offer much, but I think that hearing and sight have some interesting potential. I've been thinking about the AirPods Pro, for instance. They are great for sound output and also use the microphone for input and sound filtering. What if they could also pick up on brain activity? I can imagine a flow something like: these things we stick in our ears monitor brain activity for a stream of "text" input. The text then goes through a filter very much like what we use today for speech-to-text (i.e. composing a message) and speech-to-command (i.e. the various "ladies in a tube"). This of course leads to imprecise input, as we're playing a really bad game of telephone at this point. So, what if we brought touch and sight back in at this point? Imagine a real-time display of what is coming from the pipeline (mind-reading AirPods to speech-to-text/command) with suggestions for quick adjustments and corrections on the fly. This could hit on all key areas of improvement - speed, intuitiveness, precision, and ergonomics. As the "mind reading" part of this is a bit far-fetched at this point, I wonder what it would be like to replace that with speech. It's disruptive in situations with other people involved, but could be an interesting proof of concept to try right now.

Cloud Backups for a Synology

January 5, 2019 · 5 min read

I have a Synology DS415+ that performs wonderfully as a NAS. It's used mostly as a place for shared content, Time Machine backups, and a redundant copy of video data. It's great to have 16TB of available network storage, but there are limited options for a cost-effective cloud backup of multiple terabytes of data.

A couple of years ago, there was one decent option for backing up Synology data to the cloud. CrashPlan had an unlimited personal plan for less than $5/month, and their client could run directly on the Synology. The downside to CrashPlan was that every time the Synology OS was updated there was a risk of the client breaking and needing to be re-installed and/or re-configured. For the price and the self-contained solution, this was an acceptable tradeoff, at least for me. In 2017, CrashPlan did away with their personal plan, so this route is no longer an option.

Over the last year, I've tried some other options for large, inexpensive storage, like hubiC, which offers 10TB for €50/year. I didn't find their control panel or service to be consistently reliable, so I've been searching for other solutions and have found two that work decently well. They aren't without some tradeoffs, but if you are looking for inexpensive, unlimited cloud backup, these are two decent options.

Backblaze over iSCSI

Backblaze offers an unlimited personal backup plan for $5/month that will back up any local drives on a computer. Their client only runs on a Mac or PC, but it is written natively for the platform (unlike CrashPlan, which had a memory-intensive Java client). So, this option requires a dedicated computer to run the Backblaze client, and the Synology has to be mounted as a local drive, since Backblaze does not support network shares. Remember, I said there were tradeoffs.

A Mac Mini is perfect for this task. It's small and quiet and can easily be run headless. To set up an iSCSI target on the Synology, open up the iSCSI Manager (from the main menu), click on Target, and create a new target and LUN. The wizard will walk you through the creation of both and will have you create a LUN of whatever size you choose. Note that this will be a new drive on your Mac or PC that will be formatted by the operating system. Any data you wish to have backed up will need to be moved or copied to this drive. Again, tradeoffs.

macOS does not come with an iSCSI initiator, which is required to mount the iSCSI target as a local drive. The options out there are pretty bleak. Most people recommend globalSAN ($89) or Xtend SAN ($249), both of which are pricey (remember, we are trying to keep this solution cost-effective). A third option I came across was KernSafe, which has a free pricing tier for personal use. This seemed promising from a cost perspective, but I could not get it to work (at least on Mojave). After trying and failing with KernSafe, I decided to give the open source iSCSIInitiator a try. Before installing, you will need to allow unsigned kernel extensions by running csrutil disable (from the Recovery OS). I also had a lot of trouble with the downloadable binary on Mojave: I would get segfaults nearly every time I ran the tool (about 1 in 10 tries would give me a successful run). So, you do need to build and install the tool from source, which is not a difficult task, as the shell scripts work very well (just make sure you have Xcode and its tooling installed). This is a command-line-only tool, so once it's installed, here is a quick guide to getting it up and running:

> iscsictl add target <target_IQN>,<synology_IP>:3260
> sudo iscsictl modify target-config <target_IQN> -auto-login enable
> iscsictl login <target_IQN>

After running the login command, you should see a prompt that an external drive has been connected. Open Disk Utility and format the drive to the file system of your choosing. Add the data to this drive that you wish to back up, install Backblaze, and start backing up to the cloud.

Google Drive Sync

A second option is to sync all your data to Google Drive using Synology's Cloud Sync app. You can get unlimited data on Google Drive for $10/month if you sign up for a G Suite Business plan. The admin user on this account gets unlimited storage on Google Drive, which is perfect for syncing a large set of data from a Synology. This has none of the tradeoffs of Backblaze over iSCSI: the need for a dedicated computer, a complicated setup, and a separate drive that can't be easily shared on the network like other NAS shares. However, the big tradeoff of Cloud Sync is that it only syncs your data (you can choose bi-directional or one-way sync) and does not create backup sets like Backblaze does. The Cloud Sync app will also hog all of the upload bandwidth, whereas Backblaze will throttle its uploads. To alleviate this, I set the sync schedule of Cloud Sync to only run at night. This method also costs $5 more per month, but it does not require the upfront cost of a dedicated computer or the cost of running and maintaining one.

Because each of these options has its tradeoffs, for now, I run both of them to get the best of both worlds. I was already paying for a G Suite Business plan and already had a Mac Mini, so there was little additional cost. It also has the benefit of multiple backups being created, which is far better than just one.

A Small Rust API with Actix

May 10, 2018 · 6 min read

I had the need for a very small API for this website. So small, in fact, that only one endpoint was required. I've been doing a lot of development in Rust lately, so naturally Rust seemed like a good candidate to build this API in. I also wanted to try out a newer Rust web framework called Actix web. It claims to be a "small, pragmatic, and extremely fast rust web framework", which sounded perfect for what I needed.

Getting started with Actix web is pretty straightforward. First, create a new Rust project:

cargo new my_api --bin

Cargo (the Rust package manager) is installed along with Rustup, the popular Rust installer. Adding the Actix web server to your project can be done by first adding dependencies to Cargo.toml:

[dependencies]
actix = "0.5"
actix-web = "0.6"

and then starting a server in main:

extern crate actix_web;
use actix_web::{server, App, HttpRequest};

fn index(req: HttpRequest) -> &'static str {
    "Hello world!"
}

fn main() {
    server::new(
        || App::new()
            .resource("/", |r| r.f(index)))
        .bind("127.0.0.1:8088").expect("Can not bind to 127.0.0.1:8088")
        .run();
}

The Actix web quickstart guide gives a pretty good overview of getting started with Actix web.

The functionality I wanted for this particular API was to return some stats about my running for the year to display on this website. In order to get that data, I needed to make a couple of GET requests to Running Ahead, parse the results, and return a JSON structure showing the total mileage run for the year and the mileage from my 5 most recent runs.

{
  "year": "422",
  "latest": [
    "6.9",
    "7.78",
    "6.98",
    "7.71",
    "6.96"
  ]
}

The first thing to do was to figure out how to make the GET requests. Actix web (which is built on the Actix actor framework) has a ClientRequest struct in its client module that allows you to make standard HTTP requests. I used ClientRequest to fetch a page from Running Ahead and return a boxed Future that parses the resulting content into a Vec of Strings.

/// get mileage for 5 latest runs
fn get_latest() -> Box<Future<Item=Vec<String>, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/latest")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        let re = Regex::new(">([0-9]*?.[0-9]*?|[0-9]*?) mi").unwrap();
                        fut_ok(re.captures_iter(str::from_utf8(&body).unwrap())
                            .into_iter()
                            .map(|item| {
                                item[1].to_string()
                            })
                            .collect())
                    })
            ),
    )
}

Note that I'm using str::from_utf8 to convert the returned body into a string slice that can be matched against the regular expression.

The request to get the total mileage for the year is very similar.

/// Get total miles for the year
fn get_year() -> Box<Future<Item=String, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/last")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        let re = Regex::new("(?s)<th>Year:</th><td>(.*?) mi</td>").unwrap();
                        let mat = re.captures(str::from_utf8(&body).unwrap()).unwrap();
                        fut_ok(mat[1].to_string())
                    })
            ),
    )
}

Remember that these functions both return Futures. Chaining them with and_then runs the requests one after the other and combines the results once both have completed (I'll sketch a concurrent variant below). In an endpoint, the chained version looks like this:

fn running(req: HttpRequest) -> Box<Future<Item=HttpResponse, Error=Error>> {
    get_year()
        .and_then(|miles_year| {
            get_latest().and_then(|miles_latest| {
                Ok(HttpResponse::Ok()
                    .content_type("application/json")
                    .body(serde_json::to_string(&MilesData {
                        year: miles_year,
                        latest: miles_latest,
                    }).unwrap()).into())
            })
        }).responder()
}
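If the two requests really should run at the same time, futures 0.1's join combinator can be used instead of nesting and_then. Here is a sketch of that variant; the function name running_concurrent is just mine, and the body otherwise mirrors the handler above:

// Sketch: `join` polls both futures at once and resolves to a tuple
// once both requests have completed.
fn running_concurrent(_req: HttpRequest) -> Box<Future<Item=HttpResponse, Error=Error>> {
    get_year()
        .join(get_latest())
        .and_then(|(miles_year, miles_latest)| {
            Ok(HttpResponse::Ok()
                .content_type("application/json")
                .body(serde_json::to_string(&MilesData {
                    year: miles_year,
                    latest: miles_latest,
                }).unwrap()).into())
        })
        .responder()
}

Registering it works the same as the chained version: point the /running resource at running_concurrent instead of running.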

All of this got me most of the way to where I needed to be. However, since the calls to Running Ahead are made over HTTPS, SSL support needs to be enabled for the actix-web dependency. This can be done by adding the alpn feature:

[dependencies]
actix-web = { version="0.6", features=["alpn"] }

Once I had alpn enabled, all worked well on my local (macOS) machine. However, when I went to deploy to a Linux server with an nginx process to provide SSL, I was met with a strange error message:

Error occured during request handling: Failed to connect to host: OpenSSL error: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1245:

Very strange. After much Googling, I found a reference that suggested trying openssl-probe. This crate searches out the locations of SSL certificates on the system and was exactly what I needed. Using openssl-probe requires adding the dependency to Cargo.toml:

[dependencies]
openssl-probe = "0.1.2"

and adding this to src/main.rs:

extern crate openssl_probe;

fn main() {
    openssl_probe::init_ssl_cert_env_vars();
    //... your code
}

Here is the final Cargo.toml:

[package]
name = "stevezeidner-api"
version = "0.1.0"
authors = ["Steve Zeidner <steve@stevezeidner.com>"]

[dependencies]
futures = "0.1"
env_logger = "0.5"
actix = "0.5"
actix-web = { version="0.5", features=["alpn"] }
openssl-probe = "0.1.2"

serde = "1.0"
serde_json = "1.0"
serde_derive = "1.0"
json = "*"
regex = "0.2"

and the source for src/main.rs:

#![allow(unused_variables)]
#![cfg_attr(feature = "cargo-clippy", allow(needless_pass_by_value))]

extern crate actix;
extern crate actix_web;
extern crate env_logger;
extern crate futures;
extern crate json;
extern crate openssl_probe;
extern crate regex;
#[macro_use]
extern crate serde_derive;
extern crate serde_json;

use actix_web::{App, AsyncResponder, error, Error, fs,
                HttpMessage, HttpRequest, HttpResponse, pred, Result, server};
use actix_web::{client, middleware};
use actix_web::http::{Method, StatusCode};
use futures::{Future, future::ok as fut_ok};
use regex::Regex;
use std::{env, io};
use std::str;

#[derive(Debug, Deserialize, Serialize)]
struct MilesData {
    year: String,
    latest: Vec<String>,
}

/// Get total miles for the year
fn get_year() -> Box<Future<Item=String, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/last")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        let re = Regex::new("(?s)<th>Year:</th><td>(.*?) mi</td>").unwrap();
                        let mat = re.captures(str::from_utf8(&body).unwrap()).unwrap();
                        fut_ok(mat[1].to_string())
                    })
            ),
    )
}

/// get mileage for 5 latest runs
fn get_latest() -> Box<Future<Item=Vec<String>, Error=Error>> {
    Box::new(
        client::ClientRequest::get("https://www.runningahead.com/scripts/<my_user_id>/latest")
            .finish().unwrap()
            .send()
            .map_err(Error::from)
            .and_then(
                |resp| resp.body()
                    .from_err()
                    .and_then(|body| {
                        let re = Regex::new(">([0-9]*?.[0-9]*?|[0-9]*?) mi").unwrap();
                        fut_ok(re.captures_iter(str::from_utf8(&body).unwrap())
                            .into_iter()
                            .map(|item| {
                                item[1].to_string()
                            })
                            .collect())
                    })
            ),
    )
}

fn running(req: HttpRequest) -> Box<Future<Item=HttpResponse, Error=Error>> {
    get_year()
        .and_then(|miles_year| {
            get_latest().and_then(|miles_latest| {
                Ok(HttpResponse::Ok()
                    .content_type("application/json")
                    .body(serde_json::to_string(&MilesData {
                        year: miles_year,
                        latest: miles_latest,
                    }).unwrap()).into())
            })
        }).responder()
}

/// 404 handler
fn p404(req: HttpRequest) -> Result<fs::NamedFile> {
    Ok(fs::NamedFile::open("static/404.html")?
        .set_status_code(StatusCode::NOT_FOUND))
}

fn main() {
    openssl_probe::init_ssl_cert_env_vars();
    env::set_var("RUST_LOG", "actix_web=debug");
    env::set_var("RUST_BACKTRACE", "1");
    env_logger::init();
    let sys = actix::System::new("stevezeidner-api");

    let addr = server::new(
        || App::new()
            // enable logger
            .middleware(middleware::Logger::default())
            .middleware(middleware::DefaultHeaders::new()
                    .header("Access-Control-Allow-Origin", "*"))
            .resource("/running", |r| r.method(Method::GET).a(running))
            .resource("/error", |r| r.f(|req| {
                error::InternalError::new(
                    io::Error::new(io::ErrorKind::Other, "test"), StatusCode::INTERNAL_SERVER_ERROR)
            }))
            // default
            .default_resource(|r| {
                // 404 for GET request
                r.method(Method::GET).f(p404);

                // all requests that are not `GET`
                r.route().filter(pred::Not(pred::Get())).f(
                    |req| HttpResponse::MethodNotAllowed());
            }))

        .bind("127.0.0.1:8888").expect("Can not bind to 127.0.0.1:8888")
        .shutdown_timeout(0)    // <- Set shutdown timeout to 0 seconds (default 60s)
        .start();

    println!("Starting http server: 127.0.0.1:8888");
    let _ = sys.run();
}