If for some reason you can’t run wp db query, but need to export SQL output to a CSV or other file, then have a look at this small WP-CLI command I whipped up that should allow this:
/**
* Runs a SQL query against the site database.
*
* ## OPTIONS
*
* <query>
* : SQL Query to run.
*
* [--format=<format>]
* : Render output in a particular format.
* ---
* default: table
* options:
* - table
* - csv
* - json
* - count
* - yaml
* ---
*
* [--dry-run=<dry-run>]
* : Perform a dry run.
*
* @subcommand sql
*/
public function sql( $args, $assoc_args ) {
global $wpdb;
$sql = $args[0];
$format = WP_CLI\Utils\get_flag_value( $assoc_args, 'format', 'table' );
$dry_run = WP_CLI\Utils\get_flag_value( $assoc_args, 'dry-run', 'true' );
// Just some precautions.
if ( preg_match( '/\b(update|delete|drop|insert|create|alter|rename|truncate|replace)\b/i', $sql ) ) {
WP_CLI::error( 'Please do not modify the database with this command.' );
}
if ( 'false' !== $dry_run ) {
WP_CLI::log( WP_CLI::colorize( '%gDRY-RUN%n: EXPLAIN of the query is below: https://mariadb.com/kb/en/explain/' ) );
$sql = 'EXPLAIN ' . $sql;
}
// Fetch results from database.
$results = $wpdb->get_results( $sql, ARRAY_A ); // phpcs:ignore WordPress.DB

// Output data.
if ( empty( $results ) ) {
WP_CLI::error( 'No results returned.' );
}

WP_CLI\Utils\format_items( $format, $results, array_keys( $results[0] ) );
}
I’d add an example here, but I don’t have any right now that I can share. I’ll try to find one later (don’t hold your breath on me remembering to do that).
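For illustration only, here’s a hypothetical invocation; the `custom` namespace is a placeholder, since the command registration isn’t shown above:

```shell
# Hypothetical: "custom" stands in for whatever namespace the class is
# registered under via WP_CLI::add_command() -- not a real command name here.
# --dry-run defaults to true, so an EXPLAIN runs unless it is set to false.
wp custom sql "SELECT ID, post_title FROM wp_posts LIMIT 5" --format=csv --dry-run=false
```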
I used to run a MariaDB server on an old Linux machine for working with database dumps and other things, but after moving all of that to a new Raspberry Pi 4, I never really set it back up.
So instead I thought I’d play with this fancy new Docker stuff all the cool kids are talking about. I ended up getting phpMyAdmin and MariaDB working with Docker on my MacBook Pro.
Caveat:
To start up the MariaDB container thing, I ran this command:
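A minimal sketch of such a command; the container name, root password, and port mapping below are placeholder values rather than my actual settings:

```shell
# Start a MariaDB container for local tinkering only -- never production.
# All names and values here are placeholders, not my real settings.
docker run -d \
	--name some-mariadb \
	-e MYSQL_ROOT_PASSWORD=changeme \
	-p 3306:3306 \
	mariadb:latest
```

phpMyAdmin can then run from the official phpmyadmin image, pointed at the same host and port.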
One of the things that bothered me for quite some time with my Pi-Hole was that using it with Google Wifi (first gen), it wouldn’t automatically detect device hostnames. I’d done a lot of googling and never could get it to work even after a lot of different trials with multiple settings.
Eventually I gave up and instead wrote a command that would use nmap to help fill in the gaps, and output to /etc/pihole/custom.list:
#!/bin/bash

if [ "$(id -u)" != "0" ]; then
echo "This script must be run as root" 1>&2
exit 1
fi
echo -n "Looking up MAC and IPs"

for ip in "192.168.1.0" "192.168.2.0" "192.168.3.0"; do
echo -n .
# This is very slow.
nmap -sP "$ip"/20 > "$ip"-nmap.txt
done
echo

# Mega-command to turn the nmap output into a CSV.
cat 192.168.?.0-nmap.txt \
| sed '/^Starting Nmap/ d' \
| sed '/^Host is up/ d' \
| sed '/^Stats:/ d' \
| sed '/^Ping Scan Timing:/ d' \
| sed '/^Nmap done:/ d' \
| sed -z 's/\nMAC/,MAC/g' \
| sed -e 's/Nmap scan report for //g' \
| sed -e 's/MAC Address: //g' \
| sed -e 's/ (/,(/g' \
| grep -Ev $'^[0-9.]+$' \
| sort -u > ip-mac-mapping.csv
rm /etc/pihole/custom.list 2> /dev/null

while IFS=, read -r col1 col2 col3
do
# Strip out opening and closing parenthesis.
col3="${col3//[\(\)]/}"
# Replace all non-alphanumeric characters with dashes.
col3="${col3//[^[:alnum:]]/-}"
# Manually name some of the MACs I already know.
case "$col2" in
"24:05:88:XX:XX:XX")
col3="Google-Wifi"
;;
"B0:19:C6:XX:XX:XX")
col3="Derricks-iPhone"
;;
"CC:44:63:XX:XX:XX")
col3="iPad-Pro"
;;
"C8:D0:83:XX:XX:XX")
col3="Apple-TV-Den"
;;
"50:32:37:XX:XX:XX")
col3="Apple-TV-Bedroom"
;;
"DC:A6:32:XX:XX:XX")
col3="Ubuntu-Server"
;;
"38:F9:D3:XX:XX:XX")
col3="Derrick-MBP"
;;
*)
echo -n
;;
esac
# For some reason, this one is still funky, so I'm adding in a special case for it.
# Could have just been weird caching during my testing.
case "$col1" in
"192.168.1.1")
col3="Google-Wifi"
;;
*)
echo -n
;;
esac
# The PiHole custom.list is supposed to follow the hosts standard, but it seems that
# it is not happy with tabs and comments :sadpanda:
echo "$col1 $col3 # $col2"
echo "$col1 $col3" >> /etc/pihole/custom.list
done < ip-mac-mapping.csv
This will attempt to grab some info about all the devices on your network via nmap, but also allow you to manually override that per IP or per MAC. I have of course stripped out some devices and semi-anonymized my MACs in the above example.
The nmap scan can be pretty slow, especially if you’re running this on a first gen Raspberry Pi like I am.
I was recently working on an issue that required me to dig through gigabytes of CSV files to look for patterns and data points. I had no intention of bringing all of that data into Excel, because there’s not enough RAM on earth for that to work. Instead, I thought it would be easier to dump the data into MariaDB (MySQL would be fine as well) and just write some simple SQL statements.
There were a few things I tried, such as LOAD DATA INFILE and other SQL-level approaches. Nothing really worked properly, so instead I decided to just convert the CSV files into SQL statements I could import.
This is a bad idea! Don’t ever let this touch a live production database!
I wrote this little PHP script to convert all of the CSV files in a directory to SQL:
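My script was PHP, but the core idea fits in a few lines of shell; this sketch assumes simple CSVs (header row first, no quoted fields or embedded commas) and derives the table name from the file name:

```shell
# Turn one CSV (header row + data rows) into INSERT statements on stdout.
# Assumes no embedded commas or quotes; table name comes from the file name.
csv_to_sql() {
	local f="$1" table
	table="$(basename "$f" .csv)"
	awk -v table="$table" -F, '
		BEGIN { q = sprintf("%c", 39) }       # a literal single quote
		NR == 1 { cols = $0; next }           # header row becomes the column list
		{
			printf "INSERT INTO %s (%s) VALUES (", table, cols
			for (i = 1; i <= NF; i++)
				printf "%s%s%s%s", (i > 1 ? ", " : ""), q, $i, q
			print ");"
		}
	' "$f"
}
```

Running it over a whole directory is then just `for f in *.csv; do csv_to_sql "$f" > "${f%.csv}.sql"; done`, after which the .sql files can be imported.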
There’s a lot of ways that this could be made better, but it works in a quick pinch. If you need something similar, feel free to take this as a starting point.
I recently came across the need to watch my disk space while running a slow program to make sure I didn’t run out. If I did, the command would fail silently and I’d have to start over again.
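A minimal sketch of this kind of watch loop, where disk1s5 and the 5-second poll interval are assumed values to swap for your own:

```shell
# Poll disk usage on one volume every few seconds.
# disk1s5 and the 5-second interval are assumed values -- swap in your own.
watch_disk() {
	while true; do
		df -h | awk 'NR == 1 || /disk1s5/'
		sleep 5
	done
}
```

Run `watch_disk` in a second terminal while the slow program does its thing.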
The important part here is disk1s5, which is the device file for the partition I wanted to watch. If you need to find this, it can be done simply by running df as a whole:
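Plain df -h lists every mounted filesystem, and the device file appears in the first column:

```shell
# List all mounted filesystems; the Filesystem column holds the device file
# (on my Mac the APFS data volume shows up as /dev/disk1s5).
df -h
```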
In my forever attempt to learn and understand Gutenberg, React, and ES6 (which I am failing at horribly), I sometimes build WordPress plugins just for the learning experience. I have no real intention of releasing these to the WordPress.org Plugin Repository, because I honestly don’t feel like dealing with end users and support. I’m a horrible person, I know.
One that I’ve got in my barrel of cobwebs is Embeds for Twitch and Gutenberg.
Recently I’ve been working with a client whose site we’re going to soon be migrating. To help with any downtime, we’ve been looking at reducing their database size, which is something around 50-60 gigabytes. After looking through the database, one easy win would be to purge as many post revisions as possible, since their wp_posts table was about 50% revisions.
We couldn’t just mass delete anything with a revision post type though, because the client had some specific needs:
Keep all revisions for posts from the last 12 months.
Keep all revisions that were made after the post was published.
Keep a backup of all revisions deleted.
To do this, I crafted a custom WP-CLI command to purge these revisions. I’ve accidentally deleted the final version of the script, since it was only a one-run thing, but here’s an earlier version that could be a good starting point for anyone else that has a similar need to prune revisions:
/**
* WP-CLI command that deletes pre-publish post revisions for posts older than 12 months.
*
* @subcommand prune-revisions [--live] [--verbose]
*/
public function prune_revisions( $args, $assoc_args ) {
global $wpdb;
$live = (bool) WP_CLI\Utils\get_flag_value( $assoc_args, 'live', false );
$verbose = (bool) WP_CLI\Utils\get_flag_value( $assoc_args, 'verbose', false );
$offset = 0;
$limit = 500;
$count = 0;
$deletes = 0;
if ( $live ) {
$output_file = sanitize_file_name( sprintf( 'prune-revisions-backup_%s_%d.csv', get_bloginfo( 'name' ), current_time( 'timestamp', true ) ) );
$handle = fopen( $output_file, 'wb' );
if ( false === $handle ) {
WP_CLI::error( sprintf( 'Error opening %s for writing!', $output_file ) );
}
// Headers.
$csv_headers = array(
'ID',
'post_author',
'post_date',
'post_date_gmt',
'post_content',
'post_title',
'post_excerpt',
'post_status',
'comment_status',
'ping_status',
'post_password',
'post_name',
'to_ping',
'pinged',
'post_modified',
'post_modified_gmt',
'post_content_filtered',
'post_parent',
'guid',
'menu_order',
'post_type',
'post_mime_type',
'comment_count',
'filter',
);
fputcsv(
$handle,
$csv_headers
);
}
$count_sql = 'SELECT COUNT(ID) FROM ' . $wpdb->posts . ' WHERE post_type = "revision"';
$revision_count = (int) $wpdb->get_row( $count_sql, ARRAY_N )[0];
$progress = \WP_CLI\Utils\make_progress_bar( sprintf( 'Checking %s revisions', number_format( $revision_count ) ), $revision_count );
do {
$sql = $wpdb->prepare( 'SELECT ID FROM ' . $wpdb->posts . ' WHERE post_type = "revision" LIMIT %d,%d', $offset, $limit );
$revisions = $wpdb->get_results( $sql );
foreach ( $revisions as $revision ) {
$count++;
$post_parent_id = wp_get_post_parent_id( $revision->ID );
// Fail on either 0 or false.
if ( false == $post_parent_id ) {
WP_CLI::warning( sprintf( 'Revision %d does not have a parent! Skipping!', $revision->ID ) );
continue;
}
$revision_modified = get_post_modified_time( 'U', false, $revision->ID );
$parent_publish_time = get_post_time( 'U', false, $post_parent_id );
if ( $parent_publish_time < current_time( 'timestamp' ) - ( MONTH_IN_SECONDS * 12 ) ) {
// Post is older than 12 months, safe to delete pre-publish revisions.
if ( $revision_modified < $parent_publish_time ) {
if ( $live ) {
// We're doing it live!
WP_CLI::log( sprintf( 'Deleting revision %d for post %d. (%d%% done)', $revision->ID, $post_parent_id, ( $count / $revision_count ) * 100 ) );
// Backup data!
$output = [];
$data = get_post( $revision->ID );
// Validate the field is set, just in case. IDK how it couldn't be.
foreach ( $csv_headers as $field ) {
$output[] = isset( $data->$field ) ? $data->$field : '';
}
fputcsv( $handle, $output );
$did_delete = wp_delete_post_revision( $revision->ID );
// Something went wrong while deleting the revision?
if ( false === $did_delete || is_wp_error( $did_delete ) ) {
WP_CLI::warning( sprintf( 'Revision %d for post %d DID NOT DELETE! wp_delete_post_revision returned:', $revision->ID, $post_parent_id ) );
}
$deletes++;
// Pause after lots of db modifications.
if ( 0 === $deletes % 50 ) {
if ( $verbose ) {
WP_CLI::log( sprintf( 'Current Deletes: %d', $deletes ) );
}
sleep( 1 );
}
} else {
// Not live, just output info.
WP_CLI::log( sprintf( 'Will delete revision %d for post %d.', $revision->ID, $post_parent_id ) );
}
} else {
// Revision is after the post has been published.
if ( $verbose ) {
WP_CLI::log( sprintf( 'Post-Publish: Will NOT delete Revision %d for post %d.', $revision->ID, $post_parent_id ) );
}
}
} else {
// Post is too new to prune.
if ( $verbose ) {
WP_CLI::log( sprintf( 'Too-New: Will NOT delete Revision %d for post %d.', $revision->ID, $post_parent_id ) );
}
}
}
// Pause after lots of db reads.
if ( 0 === $count % 5000 ) {
if ( $verbose ) {
WP_CLI::log( sprintf( 'Current Count: %d', $count ) );
}
sleep( 1 );
}
// Free up memory.
$this->stop_the_insanity();

// Paginate.
if ( count( $revisions ) ) {
$offset += $limit;
$progress->tick( $limit );
} else {
WP_CLI::warning( 'Possible MySQL Error, retrying in 10 seconds!' );
sleep( 10 );
}
} while ( $count < $revision_count );
$progress->finish();
if ( $live ) {
fclose( $handle );
WP_CLI::success( sprintf( 'Deleted %d revisions', $deletes ) );
} else {
WP_CLI::success( sprintf( 'Processed %d revisions', $revision_count ) );
}
}
This past Monday marked my five year anniversary at Automattic, working for WordPress VIP. It seems forever ago, but at 9:59AM on February 11, 2015 I emailed jobs@automattic.com with my resume built as a child theme of twentyfifteen. Somehow I must have tricked them, because 22 days later (which was pretty fast for a reply from Automattic back then) I got a response, interviewed, code tested, trialed, and ended up sticking around.
I’m not going to go through the boring details of how hiring works at Automattic. Many great friends and bloggers better than me have written about it. I would though like to highlight a few of my favorite parts of my five year journey.
The first few weeks and months were a bit surreal, trying to just get caught up to speed in the chaos. Spring turned into Summer, and I came across my first real moment of panic: Travel.
I was asked to go to New York to meet with some clients, and attend the BigWP meetup. Now, for those of you that don’t know me, I was born and raised in Nowhere, USA, where we don’t really travel much. Luckily, it ended up great! BigWP was a blast, and meeting up with coworkers in real life, traveling around NY, and visiting clients really set in stone that the work that I was doing mattered to people.
https://x.com/crushgear/status/643925830742212608
It wasn’t a month later that I was off again on another adventure: my first Grand Meetup. The Grand Meetup is the once-a-year gathering of everyone from Automattic where we teach, learn, work, play, and really connect. For being a fully distributed company, this face to face time invigorates me more than anything and gets me excited about the work we still have to do. So, in October of 2015 I went to Park City.
At that time VIP felt like a good size, and tiny in comparison to the larger company of 400 Automatticians.
Fast forward a few more months and I’m headed to my first team meetup, in Barcelona. By now, I’ve flown a few times, so I’ve got this down, but overseas, that was another frightening first.
That meetup in Barcelona, my first meetup, will have a special place in my heart forever. I don’t think anything can replace that feeling of fear, excitement, esprit de corps, and so many other emotions. From having coffee at a cafe at nearly midnight with a couple of friends just chatting about life, to actually working together to build our business plans for the year ahead–even thinking back about it now, it seems surreal.
Right. Now, this was all within the first six months. At this rate, this post will end up a small book, so I’ll just hit some yearly highlights.
While I love the work that I do, and I find fulfillment in it, what really sets Automattic and WordPress VIP apart is the people. We come from a wide variety of backgrounds and places, and every single person is extremely talented, kind, and just plain amazing. I wouldn’t want to be anywhere else.
HyperDock, a macOS tool that provides windows previews like modern versions of Windows, seems to have problems on macOS Catalina for me. At random times, it will stop working and I need to go to System Preferences and disable and re-enable it; sometimes many times a day.
I’ve tried contacting the author and so far I’ve not gotten any response. The last update was quite a while ago, and … just radio silence.
To help fix this, I’ve fiddled around enough to figure out what program to run to get HyperDock to work again, and have built a really janky daemon for it using a Login Item.
I have this script that runs every time I log in, which will (re)start the HyperDock Helper, and restart it every time it dies:
#!/bin/zsh

until "/Users/username/Library/PreferencePanes/HyperDock.prefpane/Contents/Resources/HyperDock Helper.app/Contents/MacOS/HyperDock Helper"; do
echo "HyperDock Helper crashed with exit code $?. Respawning.." | tee -a /Users/username/.hh.log >&2
sleep 1
done