Ruby Central is very excited today to announce that Samuel Giddins has joined as the organization’s first open source employee as a Security Engineer in Residence. This residency is made possible thanks to support from Amazon Web Services (AWS).

Software supply chain security has become increasingly important for companies over the past few years, as attackers of all sizes, up to nation-state actors, exploit supply chain vulnerabilities to breach critical systems. Projects like RubyGems and RubyGems.org play a crucial role in providing a secure ecosystem for millions of open source Ruby users around the world.

“The hiring of a full-time W2 employee working on open source software marks a major milestone in the growth of Ruby Central and our community. We are finding sustainable ways to write software for the community, and that’s very exciting for all of us,” said Adarsh Pandit, Ruby Central’s Executive Director.

RubyGems is a package management framework for Ruby. RubyGems.org is the Ruby community’s gem hosting service. Historically, RubyGems was staffed by volunteers, then occasionally paid contract contributors, and now regularly paid contract contributors funded by other entities. This hire marks the first time any individual will work full-time on RubyGems, and a new era of maturity for the Ruby package ecosystem.

“I’ve been working on RubyGems for almost a decade,” said Giddins. “I’m excited to be able to focus my full attention on combining a focus on security and user experience to help make the Ruby packaging ecosystem the most secure and easiest to use software ecosystem. Full-time work will enable me to both dig into substantial projects that are hard to tackle with scattered time, as well as develop a holistic approach to modernizing the RubyGems ecosystem’s security posture and gain community buy-in and adoption for this work.”

Giddins will spend the next year focused on improving the security posture of the Ruby packaging ecosystem, with a broad focus on making the most secure option the easiest option. His responsibilities will include evangelizing and improving accessibility of security features for RubyGems users, and pushing the RubyGems ecosystem towards adopting industry standard security frameworks (such as Sigstore, SLSA, in-toto, OIDC, WebAuthn, and many more acronyms).

Much of the work that Giddins will be doing will happen in the open, in the RubyGems GitHub organization and the Bundler Slack. He will be posting updates as he goes, so you can follow along there or sign up to receive occasional updates by email.

This role wouldn’t be possible without a generous grant from AWS, the world’s most comprehensive and broadly adopted cloud.

If you want to inquire about sponsorship opportunities, please contact us. Please direct media inquiries to the Ruby Central team.

Advent of Prism: Part 6 - Control-flow writes

This blog series is about how the prism Ruby parser works. If you’re new to the series, I recommend starting from the beginning. This post is about control-flow writes.

Our system runs scheduled background services once a week which, although they don’t affect user-facing functionality, take approximately 5–6 hours to complete.

The side effect of these operations was a notable strain on our system’s reporting capabilities.

Our admin team, responsible for generating multiple reports, experienced significant delays.

These reports usually take between 5–10 seconds per record. However, on the days when our scheduled background services run, these report requests frequently timed out after 30 seconds.

A temporary solution involved caching report fields. This reduced the frequency of timeouts after the first few attempts, allowing fields to be cached in stages.

This approach, while reducing timeouts, also caused confusion among admins due to occasional report timeouts, raising questions about system bugs or limitations.

Optimizing report queries wasn’t a viable option: the numerous legacy reports were producing accurate results and were used solely by the internal admin team once a week.

Investing time in optimization would be time-consuming, and we wanted to avoid altering the already accurate report logic.

The Solution:

Implementing a Replica DB

To address this challenge, we decided to integrate a Replica DB. This setup would ensure synchronized operations between two databases:

  • The first database is configured specifically to handle read queries.
  • The second database focuses exclusively on write operations.

For guidance on this implementation, we referenced the following articles:

Given that we were operating on Rails 6, we couldn’t leverage model-level DB replica setups. As a result, we had to configure the entire application for dual database operations.

Configuration Steps:

1. Update database.yml:

We revised this file to distinguish between the primary and read replica configurations. Since migrations in our replica DB and syncing of data with the primary DB were managed by Heroku, we set database_tasks: false.

# config/database.yml

production:
  primary:
    <<: *default
    url: <%= ENV['DATABASE_URL'] %>
  primary_replica:
    <<: *default
    url: <%= ENV['DATABASE_REPLICA_URL'] %>
    database_tasks: false
    replica: true

2. Setting up a Follower DB in Heroku:

We utilized Heroku’s follower DB functionality, designating it as our replica DB.

3. Automatic Connection Switching:

In config/application.rb, we activated automatic connection switching as follows:

# config/application.rb

config.active_record.database_selector = { delay: 2.seconds }
config.active_record.database_resolver = ActiveRecord::Middleware::DatabaseSelector::Resolver
config.active_record.database_resolver_context = ActiveRecord::Middleware::DatabaseSelector::Resolver::Session

4. Adjusting the ApplicationRecord:

Given our Modular Monolith Architecture, as described in The Modular Monolith: Rails Architecture, we opted for an initializer rather than updating the ApplicationRecord files in each engine:

# config/initializers/database_connection.rb

ActiveRecord::Base.connects_to database: { writing: :primary, reading: :primary_replica }

5. Challenges & Adjustments:

Post-implementation, our tests in the testing environment flagged multiple ActiveRecord::ReadOnlyError Write query attempted while in readonly mode: errors. These arose from requests using the GET method but attempting record updates. To address this:

  • We either transitioned these code blocks to POST/PUT requests
  • Or encapsulated them within a writer role:

ActiveRecord::Base.connected_to(role: :writing) do
  # code using the writer role
end

Additionally, our use of the Devise gem presented complications. Devise occasionally performs writes inside GET requests. To counteract this, we added the following to all our Devise user models:

# models/user.rb

def update_tracked_fields!(request)
  User.connected_to(role: :writing) { super }
end

def update_tracked_fields(request)
  User.connected_to(role: :writing) { super }
end

def remember_me!
  User.connected_to(role: :writing) { super }
end

def forget_me!
  User.connected_to(role: :writing) { super }
end

Our implementation of the dual database setup was a success. By redistributing the workload, we ensured that reports were generated promptly on the first attempt. We are currently in the process of analyzing and visually representing the performance boost this implementation has provided.

When writing tests in Rails, you should avoid repetition and have the right amount of tests to satisfy your use case.

This article will introduce you to shoulda-matchers with RSpec for testing functionality in Rails. At the end of the post, you should feel confident about using shoulda-matchers in your Rails application.

Let's get going!

Getting Started

Go ahead and clone the repository of this starter Rails app.

The starter-code branch has the following gems installed and set up:

Shoulda Matchers for Ruby on Rails

According to the shoulda-matchers documentation:

Shoulda Matchers provides RSpec- and Minitest-compatible one-liners to test common Rails functionality that, if written by hand, would be much longer, more complex, and error-prone.

Let's see how shoulda-matchers will look before installing and using them. Our repository has an Author and Book model. We'll add name validation to the Author model without shoulda-matchers.

RSpec.describe Author, type: :model do
  describe "validations" do
    it "is invalid with invalid attributes" do
      expect(build(:author, name: '')).to_not be_valid
    end
  end
end
In the above, we build an author record without a name, and we expect it to be invalid. If we validate the name's presence in our Author model, this spec should pass.

Note: While we’ll cover shoulda-matchers with RSpec in this post, you can use other frameworks like Minitest instead.

Installation of shoulda-matchers Gem for Ruby on Rails

Add the shoulda-matchers gem to the test group in your Gemfile. It should look like this:

group :test do
  gem 'shoulda-matchers', '~> 5.0'
end

Then run bundle install to install the gem. Next, place the code snippet below at the bottom of the spec/rails_helper.rb file.

Shoulda::Matchers.configure do |config|
  config.integrate do |with|
    with.test_framework :rspec
    with.library :rails
  end
end

Here we specify the test framework and library we’ll be using.

Now we'll dive into our Active Model spec.

Active Model Spec in Rails

Your Active Model spec might consist entirely of validations similar to the spec above, which shoulda-matchers handles for you. You’ll want to test validating the presence or length of certain attributes. For example, in the sample app we have above, it’s important to validate the name presence for the author model.

describe "validations" do
  it { should validate_presence_of(:name) }
  it { should validate_length_of(:name).is_at_least(2) }
  it { should validate_length_of(:name).is_at_most(50) }
end

Here, we validate the presence and length of name. You can see that these validations are one-liners compared to the initial spec we created when we didn’t use shoulda-matchers. The opposite of presence is absence, so we can validate that an attribute is absent like this:
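
For reference, the model code that would satisfy these specs is just as short. This is a sketch assuming the sample app’s Author model:

```ruby
# app/models/author.rb (sketch)
class Author < ApplicationRecord
  # Name must be present and between 2 and 50 characters long.
  validates :name, presence: true, length: { minimum: 2, maximum: 50 }
end
```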

it { should validate_absence_of(:name) }

Here's another validation spec:

it { should validate_numericality_of(:publication_year).is_greater_than_or_equal_to(1800) }

In the above, we test whether publication_year is a numerical value and if it’s greater than or equal to 1800. We can modify the comparison to look like this:

it { should validate_comparison_of(:publication_year).greater_than(1800) }

This assumes we intend to make use of validate_comparison_of.

You can also test for validate_exclusion_of (and its opposite, validate_inclusion_of) like this:

it { should validate_exclusion_of(:username).in_array(['admin', 'superadmin']) }
it { should validate_inclusion_of(:country).in_array(['Nigeria', 'Ghana']) }

Let's say you need to validate a password confirmation:

it { should validate_confirmation_of(:password) }

You’ll want to validate that an attribute has been accepted where necessary. This comes in handy when dealing with terms_of_service, for example:

it { should validate_acceptance_of(:terms_of_service) }

Next up, let's turn our attention to the Active Record spec.

Active Record Spec in Rails

In some cases, you’ll want to validate an attribute's uniqueness. This one-liner handles that:

it { should validate_uniqueness_of(:title) }

You can take it a bit further using scope:

it { should validate_uniqueness_of(:title).scoped_to(:author_id) }

This will check that you have a uniqueness validation for the title attribute, but scoped to author_id.

We can also test the relationship between authors and books. Let's say an author is supposed to have many books.

describe "association" do
  it { should have_many(:books) }
end

This spec will pass if we have the relationship specified in the author model. Then, for the book model, we can have a belongs_to spec:

it { should belong_to(:author) }
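
Together, these association specs assume models along these lines (a sketch based on the sample app):

```ruby
# app/models/author.rb and app/models/book.rb (sketch)
class Author < ApplicationRecord
  has_many :books
end

class Book < ApplicationRecord
  belongs_to :author
end
```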

There are also one-line specs for other associations you might want to test:

it { should have_one(:delivery_address) }
it { should have_one_attached(:avatar) }
it { should have_many_attached(:pictures) }
it { should have_and_belong_to_many(:publishers) }
it { should have_rich_text(:description) }

If you want, you can test that there are specific columns in your database:

it { should have_db_column(:title) }

You can take it further to test for the column type:

it { should have_db_column(:title).of_type(:string) }

There is also the option of testing for an index:

it { should have_db_index(:name) }

Even if you have a composite index:

it { should have_db_index([:author_id, :title]) }

You can use implicit_order_column in Rails v6+ to define the custom column for implicit ordering:

self.implicit_order_column = "updated_at"

Here, we specify that we want the updated_at column to handle ordering. So when we run Book.first, Rails will use the updated_at column instead of the id. By default, Rails uses the id to order records.

shoulda-matchers has a one-liner test for this:

it { should have_implicit_order_column(:updated_at) }

If we have an enum for our model (like enum status: [:published, :unpublished]), we can write this test:

it { should define_enum_for(:status) }

We can specify the test values:

it { should define_enum_for(:status).with_values([:published, :unpublished]) }

If you have a read-only attribute, you can also test for that:

it { should have_readonly_attribute(:genre) }

And you can test for accepts_nested_attributes_for:

it { should accept_nested_attributes_for(:publishers) }
it { should accept_nested_attributes_for(:publishers).allow_destroy(true) }
it { should accept_nested_attributes_for(:publishers).update_only(true) }

The above tests depend on the use case defined in your model. You can check the Rails API Documentation if you’re unsure how accept_nested_attributes_for works.

There are also options for testing that your records are serialized when you use the serialize macro:

it { should serialize(:books) }
it { should serialize(:books).as(BooksSerializer) }

Here, we test that books is serialized. We specify the exact serializer that we expect to use with as.

Finally, let's turn to the Action Controller spec.

Action Controller Spec in Rails

Moving on to params, let's use config.filter_parameters to filter parameters that we don’t want to show in our logs:

RSpec.describe ApplicationController, type: :controller do
  it { should filter_param(:password) }
end

You can see from the above that this spec is for the ApplicationController. For params that will be used in other controllers when creating a record (like the BooksController), we can have a spec that looks like this:

RSpec.describe BooksController, type: :controller do
  it do
    params = {
      book: {
        title: 'Tipping Point',
        description: 'Tipping Point',
        author: 1,
        publication_year: 2001
      }
    }

    should permit(:title, :description, :author, :publication_year).
      for(:create, params: params).
      on(:book)
  end
end
This will test that the right parameters are permitted for the BooksController action. The params hash we create matches part of the request to the controller. The test checks that title, description, author, and publication_year are permitted parameters for book.
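
The controller side of this would typically be a strong-parameters method; here is a sketch (the book_params name is conventional, not taken from the sample app):

```ruby
# app/controllers/books_controller.rb (sketch)
private

def book_params
  params.require(:book).permit(:title, :description, :author, :publication_year)
end
```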

What if the action needs a query parameter to work?

RSpec.describe BooksController, type: :controller do
  before do
    create(:book, id: 1)
  end

  it do
    params = {
      id: 1,
      book: {
        title: 'Tipping Point',
        description: 'Tipping Point',
        author: 1,
        publication_year: 2001
      }
    }

    should permit(:title, :description, :author, :publication_year).
      for(:update, params: params).
      on(:book)
  end
end
In the above, we use the before block to create a new book record with the id as 1. Then we include the id in the params hash.

If you have a controller action that simply redirects to another path, you can have a spec that looks like this:

describe 'GET #show' do
  before { get :show }

  it { should redirect_to(books_path) }
end

This checks that we are redirected to the books_path when the request gets to the show action.

We can modify the above spec to also test for its response:

describe 'GET #show' do
  before { get :show }

  it { should redirect_to(books_path) }
  it { should respond_with(301) }
end

We’ve modified it to test for the status code. If we’re not sure of the exact status code but we have a range of numbers, we can use the following:

describe 'GET #show' do
  before { get :show }

  it { should redirect_to(books_path) }
  it { should respond_with(301..308) }
end

We can use a rescue_from matcher to rescue from certain errors, like ActiveRecord::RecordInvalid:

it { should rescue_from(ActiveRecord::RecordInvalid).with(:handle_invalid) }

This assumes we have (or will have) a method called handle_invalid that will handle the error.

There are matchers for callbacks we tend to use in our controllers:

it { should use_before_action(:set_user) }
it { should_not use_before_action(:set_admin) }

it { should use_around_action(:wrap_in_transaction) }
it { should_not use_around_action(:wrap_in_transaction) }

it { should use_after_action(:send_admin_email) }
it { should_not use_after_action(:send_user_email) }

You can test if the session has been set or not.

it { should set_session }
it { should_not set_session }

You’ll want to use should_not set_session in your destroy action.

Finally, here's how you can write a spec for your routes:

it { should route(:get, '/books').to(action: :index) }
it { should route(:get, '/books/1').to(action: :show, id: 1) }

And that's it!

Wrapping Up

In this article, we’ve seen what a spec that does not use shoulda-matchers looks like. We then explored how to use shoulda-matchers for your Rails project. It simplifies specs — instead of a spec spanning multiple lines, shoulda-matchers span just one line.

While it’s helpful to use shoulda-matchers, you should know that they cannot replace every spec you’ll need to write; in particular, you’ll still need hand-written specs for your business logic.

Happy coding!

P.S. If you'd like to read Ruby Magic posts as soon as they get off the press, subscribe to our Ruby Magic newsletter and never miss a single post!

One day, I needed to sanitize an HTML input and went to check out how the Loofah gem solved a problem I was having.

Out of curiosity, I asked ChatGPT to provide an example of how to do it.

The answer it gave me was an exact copy and paste from Loofah. Except, there was no attribution. I only knew it was a copy because I had looked at Loofah’s implementation first.

This was not cool. I shared this with my colleagues and we decided to start writing about the recent Large Language Models (LLMs) popularization and the ethical implications of these so-called “revolutionary” Artificial Intelligence (AI) tools.

There’s a lot of discussion about the “fair use” of the training data. And I’m not the only one questioning that argument. I wanted to do something about it.

My amazing colleague, Mike Burns, accepted my invitation for a series of blog posts that dive deep into this topic. We decided to start from the beginning.

Where to begin to understand more about AI and LLMs?

In this first post, Mike lays the ground for the next chapters. The focus is on the technological implementation of LLMs, and not on companies commercializing them (stay tuned for the next post).

Most of the talk around AI has nothing to do with AI; it’s mostly focused on driving accountability and blame away from people and towards “algorithms”.

Generative AI and LLMs

I (Mike) don’t know what generative AI is, so let’s talk about Large Language Models (LLMs).

A Large Language Model is a set of data structures and algorithms. You can think of it like linked lists and binary search: a set of tools in the programmer’s toolbox, with specific use cases and drawbacks.

Some venture capitalists and visionary thinkers have made them into something they’re not. Often, they’ve made them into a scapegoat for their own goals, a prerequisite for allowing them to do what they want.

Let’s talk about that.

LLMs are not coming for your job

An LLM is not managing a company; that is done by people in the company. So if someone is saying that “LLMs are coming for your job”, what they mean is that a person is going to fire you and then have software do your job. That person is making that decision.

Algorithms using LLMs have a specific use case. They generate

  • words
  • pixels
  • bytes

They do not produce correct words, they produce likely words. Think of your phone’s predictive keyboard feature, but better.
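
To make “likely words” concrete, here’s a toy Ruby sketch. It is purely illustrative, nothing like a real LLM: it counts which word most often follows another in a tiny corpus and predicts that.

```ruby
# Toy illustration of "likely, not correct" word prediction using bigram counts.
corpus = "the cat sat on the mat the cat ran".split

# Count how often each word follows another.
bigrams = Hash.new { |h, k| h[k] = Hash.new(0) }
corpus.each_cons(2) { |a, b| bigrams[a][b] += 1 }

# Predict the most frequent follower of a given word (nil if unseen).
def likely_next(bigrams, word)
  candidates = bigrams[word]
  return nil if candidates.empty?
  candidates.max_by { |_w, count| count }.first
end

puts likely_next(bigrams, "the") # => "cat" — the likeliest, not necessarily the right, word
```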

The tasks that you do at work likely cannot be replaced with such an algorithm. So when a manager fires someone and then uses software to do their job, that’s a decision that the manager is making at the expense of the customer.

And that’s a choice. Of a human.

(Stefanni here)

Since we’re talking about the impact of these tools: as with any other “revolutionary” tool, we can never know what will happen, or whether it’s indeed a revolutionary tool after all. I agree with the view Cal Newport presents in his video “The AI Revolution: How To Get Ahead While Others Panic”.

Could it be that companies will realize that ChatGPT doesn’t do anything extraordinary that a regular employee couldn’t do? Or that hiring a person to do the job is cheaper than paying for API requests?

I guess we will find out soon.

Back to Mike.

LLMs don’t plagiarize artwork

OpenAI, one of the big names commercializing LLMs, trained their LLM on the Web, circa September 2021. GitHub, another company investing heavily in LLMs, trained theirs on the repositories that they host.

DeviantArt, Meta, Microsoft, Stability, Midjourney, Grammarly, StackOverflow, and others have engaged in similar practices: training their LLMs on works created by others, without their consent.

This was a choice that all of them made. This is not a requirement for training an LLM.

LLMs are a tool

All of this is to point out that the algorithms and data structures comprising an LLM are a tool, not a product, and people need to be careful with the tools that they use.

Of course, this is not new. LLMs did not usher in the idea of software harming people.

At every point in the history of computers, we have had to wrestle with the ethics of the products we are making.

No aspect of the data structures available has made that easier or harder. It is still our choice whether to make a product for good, and it is our burden to consider the systemic effects of the products we build.

Your programming tools have biases

Whether it’s the strongly-opinionated Rails stack or the opinionatedly-unopinionated world of JavaScript libraries, how we model software is shaped by the APIs we are given.

Let’s explore that in the context of a specific LLM API: OpenAI’s.

OpenAI is a network API

All of this is honestly a moot point for the vast majority of us:

We are not building LLMs.

Instead, we make API calls to endpoints that claim to be backed by LLMs.

Just as when you make an API call to get the exchange rate for a currency, you need to be clear to yourself and your users about the limitations of the data. There’s nothing new here.

And just as when logging data to your customer support platform, you need to be careful of what data you send over the wire. For us, consultants on our fourth HIPAA training, this is especially not new.

What are the specific, inherent problems of LLMs and GPTs?

Generative Pre-trained Transformers (GPTs), such as those used by OpenAI’s ChatGPT, are awful for the environment. Absolutely abysmal. They are bad in a unique and new way.

Quick note: we will explore this section more in the next post.

Your ethics drive the product, not your tools

The long and short of it is that the existence of LLM algorithms and data structures is not a threat. If you can find a way to make use of a computer that is bad at math and sometimes wrong, that on its own is not a problem.

The existential crisis is in making a product that harms people, which can be done with or without LLMs. It is your responsibility to build your products with care, whether you use insertion sort or machine learning.

What is your responsibility and the correct thing to do?

As with any other tools that surface widespread ethical problems, understanding how they work is critical for us to do something about it.

We hope this post clarified some terms about AI, LLMs, GPTs, etc.

In the next post, we will explore the product-facing, commercial side of companies training LLMs more in depth, including the impact on the environment.

In the meantime, we have a question for you: do you know of any ethically trained LLM, or an LLM that respects copyright? If so, we’d love to know.

CDPATH: Easily Navigate Directories in the Terminal

TL;DR: Set CDPATH with your most important or frequently used parent directories, and you can directly cd into them or their subdirectories no matter where you are in the file system. You don’t have to type the full path anymore.

You know about the PATH environment variable. Did you know about CDPATH? If you’ve been typing long cd commands like me, this post is for you.

I came across this gem while reading Daniel Barrett’s Efficient Linux at the Command Line. It instructs the cd command to search for the directory you specify in locations other than your current directory.

A cd search path works like your command search path $PATH. However, instead of finding commands, it finds the subdirectories. You can configure it with the shell variable CDPATH, and it has the same format as the PATH variable: a list of directories separated by colons.

Set this variable in your ~/.zshrc or ~/.bashrc file to include the directories you visit most frequently.

export CDPATH=$HOME:$HOME/software:$HOME/software/ruby:$HOME/software/rails:$HOME/software/youtube

Now, whenever you want to cd into a directory, the shell will look through all the above places in addition to the current directory.

What’s more, the search is lightning fast. It looks only in parent directories that you specify and nothing else.

For example, let’s say you have a $HOME/software/blog directory and you’ve configured the CDPATH to include the $HOME/software directory.

Now, if you type cd blog from anywhere in the filesystem, the cd command will take you to the $HOME/software/blog directory, unless it finds another blog directory in another pre-configured path.

The order of CDPATH matters. If two directories in $CDPATH have a subdirectory named blog, the earlier parent wins.

In the above example, cd will check for the existence of the following directories, in order, until it finds one or fails:

  1. blog in the current directory
  2. $HOME/blog
  3. $HOME/software/blog
  4. $HOME/software/ruby/blog
  5. $HOME/software/rails/blog
  6. $HOME/software/youtube/blog
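
You can try the resolution order yourself. This sketch uses throwaway directories under /tmp (hypothetical paths, purely for demonstration):

```shell
# Demonstrate CDPATH resolution with throwaway directories.
mkdir -p /tmp/cdpath_demo/software/blog

# Search /tmp/cdpath_demo first, then /tmp/cdpath_demo/software:
export CDPATH=/tmp/cdpath_demo:/tmp/cdpath_demo/software

cd /          # start somewhere unrelated
cd blog       # resolved via CDPATH; the shell prints the destination
pwd           # /tmp/cdpath_demo/software/blog
```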

Trust me, this is pretty freaking cool. It has changed the way I use the shell.

P.S. Don't overdo it. If you put too many directories with common names into CDPATH and export it, it might break your scripts.

That's a wrap. I hope you found this article helpful and you learned something new.

As always, if you have any questions or feedback, didn't understand something, or found a mistake, please leave a comment below or send me an email. I reply to all emails I get from developers, and I look forward to hearing from you.

If you'd like to receive future articles directly in your email, please subscribe to my blog. If you're already a subscriber, thank you.

If you have ever used the git command-line utility, you may have been pleasantly surprised that running git clone --help automatically displays the man page for git clone instead of the usual --help output.

This blog post will show you how to add the same functionality to your Ruby command-line utility.

Introducing kramdown-man

Man pages are written in the roff typesetting markup language, which uses macro tags that look like .PP and \fB. Needless to say, writing roff by hand is not fun. Instead, we will use the kramdown-man gem to generate the roff man page from a similar-looking pure-markdown man page.
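
For a sense of what we’re avoiding, here is a small sketch of raw roff for the top of a man page (the .TH and .SH macros and \fI/\fR font escapes are standard roff; the content is invented):

```
.TH MYCLI 1 "2024-01-01" MyCLI "User Manuals"
.SH NAME
mycli \- Does things and stuff
.SH SYNOPSIS
.B mycli
[\fIoptions\fR] \fIARG1\fR
```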

Step 1: Add kramdown-man

Add kramdown-man to your Gemfile and run bundle install:

gem 'kramdown-man', '~> 1.0'

Step 2: Write the markdown man page

First create the man/ directory in your project.

$ mkdir man

Then write the markdown man page, which should be named like man/mycli.1.md. The number in the file name indicates the man page section number (Section 1 is for General Commands, Section 3 is for Library Functions).

Use kramdown-man’s own man page as an example for how to structure your markdown man page. It should look roughly like this:

# mycli 1 "2024-01-01" MyCLI "User Manuals"

## NAME

mycli - Does things and stuff

## SYNOPSIS

`mycli` [*options*] *ARG1* [*ARG2*]

## DESCRIPTION

The `mycli` utility does things and stuff. Bla bla bla bla.

## ARGUMENTS

*ARG1*
: This is a required argument.

[*ARG2*]
: This is an optional argument.

## OPTIONS

`-f`, `--flag` *VALUE*
: This is an option flag that takes a *VALUE* argument.

`-h`, `--help`
: Prints the help information for the command.

## EXAMPLES

Example command description:

    $ mycli --flag file.txt

## AUTHORS

Your Name <>

## SEE ALSO

[bash(1)](man:bash.1) [other-man-page](other-man-page.1.md)

To preview how the markdown man page will be rendered, use the kramdown-man command:

$ kramdown-man man/mycli.1.md

Markdown Man Page Layout Explained

The first line will be used for the man page’s header and footer lines. It has the following format:

# mycli 1 "2024-01-01" MyCLI "User Manuals"
  • # mycli - The command name.
  • 1 - The section number.
  • 2024-01-01 - The date the man page is being written, in the format YYYY-MM-DD.
  • MyCLI - The project’s name.
  • "User Manuals" - The man page section name.


Man pages typically have the following main sections:

  • NAME - The command name and a short definition.
  • SYNOPSIS - The command’s usage, showing order of arguments.
  • DESCRIPTION - A more detailed description of the command.
  • ARGUMENTS - Defines the purpose of each argument.
  • OPTIONS - Defines the purpose of each option flag and its usage.
  • EXAMPLES - Show common examples of running the command.
  • AUTHORS - List the authors of the command.
  • SEE ALSO - Link to other man pages.


Argument and option definitions must be defined using markdown definition lists (hence the : before the summary) for them to be properly indented:

*ARG1*
: Definition goes here.

[*ARG2*]
: Definition goes here.

  Multiple paragraphs may be given.


Codespans indicate a literal word:

    `mycli`

Emphasis and all uppercase indicates a required argument:

    *ARG1*

Square brackets around an argument indicate an optional argument:

    [*ARG2*]

Curly braces with pipe separators indicate that one of the arguments is required:

    {*ARG1* \| *ARG2* \| *ARG3*}

To link to other man pages in your project’s man/ directory, use a regular markdown link that links to the file:

    [other-man-page](other-man-page.1.md)

This will also generate a bolded man page reference, which will look like other-man-page(1) in the displayed man page.

To link to other man pages that are already installed on a system, use a regular markdown link, but with a man:page-name.1 URL containing the man page name and section number:

    [bash(1)](man:bash.1)

This will generate a bolded man page reference, which will look like bash(1) in the displayed man page.

Note: Firefox on Linux will actually recognize man: URIs and open them using Gnome’s yelp help browser.

Step 3: Add the rake task

Now that we have written the markdown man page, we need to setup a rake task to generate the roff formatted man page from the markdown man page.

Add the following code to your Rakefile:

require 'kramdown/man/task'
Kramdown::Man::Task.new

This will define a man rake task and define file dependencies between the man/*.1 output files and the man/*.1.md input files.

Step 4: Generate the man pages

To generate all man pages in the man/ directory run:

$ rake man

You can then view the generated man pages using the man command:

man ./man/mycli.1

Step 5: Add the code

In order for our CLI to automatically display the man page when the --help option is given, we will need to add this bit of code to the OptionParser’s --help option:

# The path to the man/mycli.1 generated man page
MAN_PAGE = File.join(__dir__,'..','..','..','man','mycli.1')


opts.on('-h','--help','Prints this help') do
  if $stdout.tty?
    # running in a real terminal: display the man page
    system('man', MAN_PAGE)
  else
    # output is redirected: print the usual --help text
    puts opts
  end
  exit
end

The if $stdout.tty? check tests whether stdout is a TTY or being redirected to a file or another command. If we are running in a real TTY terminal, then display the man page. If we are not running in a real TTY terminal, then print the usual --help output. This is a polite thing to do, as users might want to view the --help output through less or might dump it to a file using --help >mycli.txt.

If you don’t want to copy/paste the above code into all of your Ruby projects, you can use the command_kit gem, which provides a CommandKit::Help::Man module that adds the same functionality to a command class.

Step 6: Package your man page

Now that we have generated our roff man pages, we will want to add them to either git or the gemspec’s files list. This way the generated roff man page will be included in the packaged gem.

If you prefer to not add the generated roff man page to git, you can manually add it to the list of files in your .gemspec file:

gem.files << 'man/mycli.1'

Then build and install your gem:

$ rake gem
$ gem install ./pkg/mycli-0.1.0.gem

Step 7: Test It!

Now your command should display its own man page when --help is given:

$ mycli --help

You should see something that looks like this:

screenshot of the displayed man page

Advent of Prism: Part 5 - Operator writes

This blog series is about how the prism Ruby parser works. If you’re new to the series, I recommend starting from the beginning. This post is about operator writes.

I’m working on a project with a slightly abnormal Postgres database structure. We’re using a lovely tool called Sequin to sync data into Postgres from Airtable.

In Airtable “linked fields” are represented as arrays of IDs: each ID pointing to a record in another table. That means that our Postgres database doesn’t have standard many-to-many join tables. On a many-to-many relationship (e.g., Vendors <-> Industries) each row in the Vendors table will have an array of Industry IDs, and each row in Industries will have an array of Vendor IDs.

CREATE TABLE "industries" (
    "id" text NOT NULL,
    "name" text,
    "vendors" text[],
    PRIMARY KEY ("id")
);

CREATE TABLE "vendors" (
    "id" text NOT NULL,
    "name" text,
    "industries" text[],
    PRIMARY KEY ("id")
);
I’ve found a couple of ways to do joins on this schema.

Joining with the ANY array operator

The ANY operator will match any item in the array. You use it in a join like this:

SELECT vendors.id, vendors.name, industries.name AS industry_name
FROM vendors
JOIN industries ON industries.id = ANY (vendors.industries);

This will output a row for each vendor/industry combination.

id name industry_name
recYDX6fBzZebC0ZC Gumbs Partners HR Services
recjPKDyJMfu0Y6Ri Lily of the Valley Floral Design Floral Design
recjPKDyJMfu0Y6Ri Lily of the Valley Floral Design Event Services
recdGYB0t9U7LozYj NXTevent, Inc. Event Services
receoJe6XpK7Sk8TZ We Grow Microgreens, LLC Food & Beverage
receoJe6XpK7Sk8TZ We Grow Microgreens, LLC Event Services
recm9iAPWPJSNhf4S Manifested Events LLC Event Services

Joining with a Postgres view

Most ORMs will struggle to create a join on an array. I’m using Prisma, which doesn’t understand that an array of strings can be used as a join to another table; it requires a join table for many-to-many relationships.

We can make a “virtual” join table using a database view, with Postgres’s unnest() function.

Credit goes to the folks at Sequin for suggesting this technique for working with Airtable data.

unnest(array_column) turns an array into rows, where each element of the array gets a row.
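For example, with a literal array (standalone, outside this schema):

```sql
SELECT unnest(ARRAY['a', 'b', 'c']) AS element;
-- three rows: 'a', 'b', 'c'
```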

Here we’ll use Prisma’s naming format for join tables.

CREATE VIEW "_IndustryToVendor" AS
SELECT
  industries.id AS "A",
  unnest(industries.vendors) AS "B"
FROM industries;

This is the resulting join table:

A B
rechodXXUzIyLWG1E rec1nNDyfSeshQhVM
reczbFISIU5fu5Fda recihNylMXO81fDgs
recyz3Dn3r8ML9UR3 rec1yOytCf7VM5nZA
recyfeS5rXGahgh7H rec1nNDyfSeshQhVM
reclBbzlO9pyaE1BV recihNylMXO81fDgs
rec4cAq4w5EYg8GId rec1nNDyfSeshQhVM

And you can use it for a join like this:

SELECT vendors.id, vendors.name, industries.name AS industry_name
FROM vendors
JOIN "_IndustryToVendor"
ON "_IndustryToVendor"."B" = vendors.id
JOIN industries
ON "_IndustryToVendor"."A" = industries.id;

This SQL query gives the same result as the ANY query above.

Prisma prefers its join table to have foreign keys, but they’re not absolutely required. Since views can’t have foreign keys, we’ve done without.


Using the “JOIN ON ANY” technique I can easily write ad-hoc many-to-many join queries.

And using the “view as a join table” technique I can set up many-to-many joins in the Prisma ORM.

A note on performance

I’m not a DB performance expert, but I doubt these techniques will be as performant as a real many-to-many join table with indexes and foreign keys.

An EXPLAIN shows that the first technique (joining on ANY) needs to make a sequential scan of the industries table. I know you can add a GIN index on an array column, but I don’t know if that would improve anything.
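For reference, the syntax for a GIN index on the array column would be (the index name is illustrative, and untested against this schema):

```sql
CREATE INDEX vendors_industries_gin ON vendors USING GIN (industries);
```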

According to EXPLAIN, the second technique (using a join view) also needs to do a sequential scan on the industries table (because it’s referenced in the view), but it’s able to do an index scan on the vendors table.

I’m curious if using a materialized view would improve performance.

Regardless, we haven’t encountered issues with slow queries yet, so I’m happy with these solutions.

You can jump directly to a section:

🚀 New Products & 📅 Events

👉 All about Code and Ruby

🧰 Gems, Libraries, and Updates

🤝 Related (but not Ruby-specific)

More content: 📚 🗞 🎧 🎥 ✍🏾 (articles, podcasts, videos, newsletters)

[Sponsor ⬇]

Visit to get started for free
Are big launches stressing you out? Then you need feature flags. 
Flipper Cloud helps your team deploy the code now and then roll out features when you are good and ready. 

Get started for free at

🚀 New Products

🚀 Ruby On Rails launched The official Rails job board is live

Source: @rails (read on nitter)

🚀 (re-launch) Yaroslav Shmarov shared Ruby on Rails #59 Hotwire Turbo Streams CRUD

Source: @yarotheslav (read on nitter)

🚀 Akshay Khot announced their new course Crash Course On Turbo (Hotwire) Framework

🚀 Sam Johnson launched DevsCoach | End-to-end Rails Stripe Integration

Source: @samcraigjohnson (read on nitter)

🚀 (pre-launch) Nicolas Alpi announced they are working on a course:

Source: @nicalpi (read on nitter)

📅 Events

📅 Tropical.Rb announced the CFP is now open at - Tropical.rb | The Rails Latam Conference

Source: @tropical_rb (read on nitter)

📅 Euruko announced the date for 2024 edition:

Source: @euruko (read on nitter)

📅 Ruby On Rails announced the date for 2024 edition:

Source: @rails (read on nitter)

📅 Rug B announced the RUG::B - December Meetup 2023

Source: @rug_b (read on nitter)

📅 Geneva Ruby Brigade announced the next edition Everyday Performance Rules for Ruby on Rails Developers (Alexis Bernard), Tue, Dec 5, 2023, 7:00 PM

Source: @genevarb (read on nitter)

👉 All about Code and Ruby

[Sponsor 👇]

Fewer 💥, more 😎. Need to restrict who can enable, disable, or roll back feature flags in a particular environment? We can help.
Know who did what (and when), roll back a change in a single click or lock down production access to a few people today at

👉 Joel Drapper shared a code sample using the Literal gem:

Here is one reply from Joel where he explains how this works, but you should read the entire conversation happening as reply to this code sample:

👉 Jorge Manrubia shared a demo of Page refreshes with morphing in Turbo 8

Source: @jorgemanru (read on nitter)

👉 John Nunemaker shared a preview of a new feature for FlipperCloud that will support client side stats:

Source: @jnunemaker (read on nitter)

👉 Colleen Schnettler shared a thread about how they created a grid of radio boxes:

Source: @leenyburger (read on nitter)
Source: @leenyburger (read on nitter)

Konnor Rogers shared a solution built only with CSS  → see it at CodePen

👉  Jorge Manrubia shared that Basecamp runs 18% faster with YJIT along with an article written by Jacopo Beschi about Basecamp code runs 18% faster with YJIT

Source: @jorgemanru (read on nitter)

👉 Greg Molnar shared a tip about silencing health checks in Rails:

Source: @GregMolnar (read on nitter)

🐬 Using the open source version of Flipper to flip features? Switch to Cloud in a few minutes for support, audit history, finer-grained permissions, multi-environment sync, and all your projects in one place.

Start with our free tier today at

👉 Shopify Engineering shared stats about the BFCM:

Source: @ShopifyEng (read on nitter)

There are some replies to this thread shared on Hacker News and Reddit if you want to read and see what people are thinking about this.

👉 Ryan Bates asked about what developers are using to run multiple versions of PostgreSQL for Ruby on Rails development:

Here are some suggestions:

👉 Petrik De Heus shared a thread with results from TechEmpower Web Framework Performance Comparison about Ruby web frameworks:

👉 Advent of code with Ruby - here are some of the links shared by people doing Advent of code with Ruby:

👉 Naofumi Kagami 加々美直史 shared Phlex — fast, object-oriented view framework for Ruby

Source: @naofumi (read on nitter)

👉 Joel Drapper shared a link to an mp3 file in the zeitwerk repo that shows how to pronounce zeitwerk:

Source: @joeldrapper (read on nitter)

👉 Daveyon Mayne shared a code sample that helps truncate search results:

Source: @sylarruby (read on nitter)

👉 Jorge Manrubia shared an example of using Turbo 8:

Source: @jorgemanru (read on nitter)
Source: @jorgemanru (read on nitter)
Source: @jorgemanru (read on nitter)

👉 David Heinemeier Hansson shared a piece of code from Once:

Source: @dhh (read on nitter)

👉 Nate Berkopec shared about Rails being a good fit for web applications:

👉 Amanda Brooke Perino shared that all communities have a similar problem:

Source: @AmandaBPerino (read on nitter)

👉 Rob Lacey shared a code sample about getting a specific digit from an Integer:

Source: @braindeaf (read on nitter)

👉 Paul Reece shared that they added a new feature to IRB, the -s flag:

👉 Matt Swanson asked about VScode extension to highlight inline ERB:

Source: @_swanson (read on nitter)

Vladimir Dementyev shared  a possible solution:

Source: @palkan_tula (read on nitter)

👉 Axel Kee shared about Ruby:

Source: @soulchildpls (read on nitter)

👉 Mike Ray Arriaga shared a stimulus controller that automatically closes flash messages:

Source: @mike_ray_ux (read on nitter)

👉 John Nunemaker shared that you can define serialization for your objects to be used in ActiveJob:

Source: @jnunemaker (read on nitter)

👉 John Mc Dowall shared that with Ruby beautiful DSLs can be created:

Source: @jmddotfm (read on nitter)

🧰 Gems, Libraries, Tools and Updates

🆕 🧰 John Mc Dowall announced a new gem called consist - The stone age one person framework server scaffolder 

Source: @jmddotfm (read on nitter)

🆕 🧰 Noel Rappin announced a new gem GitHub - noelrappin/gemfile_sorter: A Ruby Gem that sorts Gemfiles. Mostly

🧰 Postmodern announced the release of  Release 0.9.3 · postmodern/ruby-install

🧰 Stan Lo shared  Release v0.1.7 · st0012/ruby-lsp-rspec and Release 0.4.21 - vscode-ruby-lsp

🧰 A new release of Rage - Fast web framework compatible with Rails.

🧰 Brad Gessler announced an update for One-liner URL transforms in Ruby updated to include a block format

Source: @bradgessler (read on nitter)

🧰 Tim Morgan announced adding Threads to Natalie lang Threads by seven1m · Pull Request #1489 · natalie-lang/natalie

Source: @timmrgn (read on nitter)

🧰 Niklas Häusele announced they added support for text-to-speech to ruby-openai Using the OpenAI Text-to-speech API with Rails and he also shared their repo:

Source: @ModernRails (read on nitter)

🧰 Bozhidar Batsov announced Release RuboCop 1.58 · rubocop/rubocop

🧰 Jeremy Evans shared Sequel 5.75.0 Released · jeremyevans sequel · Discussion #2104

🧰 Tomoya Ishida shared a release of version 0.5.0.pre1 of reline:

Source: @tompng (read on nitter)

🧰 Vipul A M announced their PR that improves json gem is merged Perf. improvements to Hash#to_json in pure implementation generator

🧰 Yuichiro Kaneko announced a new version for Release v0.5.11 · ruby/lrama

Source: @spikeolaf (read on nitter)

🧰 Gary Tou announced a new PR for IRB - Implement history command

🧰 Stan Lo announced a new version Release v1.10.0 · ruby/irb

Source: @_st0012 (read on nitter)

🤝 Related (but not Ruby-specific)

🤝 Mike Perham shared about making blogs readable:

🤝 Peter Cooper shared how experienced people sometimes forget to take into consideration the context of newbies:

Source: @cooperx86 (read on nitter)

🤝 Justine Tunney announced llamafile - the fastest executable file format and shared an article about Mozilla-Ocho/llamafile: Distribute and run LLMs with a single file

Source: @JustineTunney (read on nitter)

🤝 Adrian Oprea shared about teaching fundamentals:

Source: @oprearocks (read on nitter)

🤝 Craig Kerstiens shared psql config recommendations:

Source: @craigkerstiens (read on nitter)

🤝 Ryan Bates shared about how UX and performance can be used to mitigate each other:

Source: @rbates (read on nitter)

🤝 Dani Grant shared a cold email that helped them get their first job:

Source: @thedanigrant (read on nitter)

🤝 Adam Wathan shared a little UI polish tip:

Source: @adamwathan (read on nitter)

More content: 📚 🗞 🎧 🎥 ✍🏾

🗞 Newsletters

🗞️ Hotwire Weekly published a new edition about Week 48 - Turbo devtools, Turbo without Rails, LAST stack tutorial series, and more!

🗞️ Joe Masilotti published a new edition about Hotwire dev newsletter - November edition

🗞️ Andy Croll published a new edition of One Ruby Thing about Find Definitions Of Rails Methods Using Source Location And Bundle Open

🗞️ Ruby Weekly published a new edition about 60 million requests per minute

🗞️ Awesome Ruby Newsletter published a new edition about Issue 393 - The official Rails job board is live

🎧 Podcasts

🎧 Lucas Barret published a new podcast about GemRuby Show: Dmitry Tsepelev, StoreModel | GemRuby Show

🎧 Rooftop Ruby published a new episode about  Live at RubyConf 2023! — Rooftop Ruby Podcast

🎧 Matt Swanson published a new podcast about YAGNI | Redis w/ Nate Berkopec

🎧 Indie Rails published a new podcast about IndieRails | What to Look For in a New Client

🎧 Rubber Duck Dev Show published a new episode about Working As A Team In Software Development

🎧 Ruby Rogues published a new podcast about Enhancing Ruby On Rails With Hotwire: Turbo, Stimulus, And Strata For Efficiency Ruby

🎧 Remote Ruby published a new podcast about Unlocking The Power Of State Machines In Code Development With Elise Schaefer

🎧 The Ruby on Rails Podcast published a new episode about Episode 497: Rachel Moser On The Odin Project

🎧 The Bike Shed published a new episode about The Bike Shed: 408: Work Device Management

📽️ 🎥 Videos


🎥 Hanami Mastery released a new episode about  Font awesome icons in Hanami apps!

🎥 Drifting Ruby published a new video about Episode 430 - Rails Organization

🎥 Thoughtbot published a new video about Importing posts from an RSS feed with Eleventy

🎥 Simon Willison published a new video about Snakes and Rubies (full) (2005)

✍🏾 Articles

What’s new 🆕

Brad Gessler published a new article about Turbo 8 in 8 minutes

RubyCentral published a new article about November 2023 Newsletter

Jacopo Beschi published an article about Basecamp Code Runs 18% Faster With Yjit

Marc Busqué published an article about Open Source Status: November 2023 Dry Operation Failure Hooks & Database Transactions

Prasanth Chaduvula published an article about Rails 7.1 Introduces Default Dockerfiles

Michel Sánchez Montells published an article about Exploring The Power Of Keyword Arguments In Ruby

Prasanth Chaduvula published an article about Rails 7.1 Adds Active Job#Perform All Later To Enqueue Multiple Jobs At Once

Alexis Bernard published a new article about Helvetic Ruby - RorVsWild

Ahmed Nadar published a new article about RapidRails UI for Ruby on Rails with TailwindCSS and ViewComponent

Hugo Vast published an article about Pimp My Code : Come And Clean Code

Hasumi Hitoshi published a new article, Reline::Faceで快適ターミナル生活 (A Comfortable Terminal Life with Reline::Face) → read the EN version via Google Translate

I published an article about Review Rails Code: Rubymine AI & Chat GPT

Deep Dives 🔍

Victor Shepelev published an article about “Useless Ruby Sugar”: Endless (One Line) Methods

Ben Sheldon published an article about  The Rails Executor: increasingly everywhere

Jesus Castello published a new article about From Complexity to Clarity: Mastering Ruby’s Pattern Matching Features

Vladimir Dementyev published a new article about TestProf III: guided and automated Ruby test profiling

Sid Krishnan published an article about The Anatomy Of A Turbo Stream

Akshay Khot published an article about Understanding The Rails Router: Why, What, And How

How-TOs 📝

Radan Skoric published an article about Using Turbo Frames And Streams Without Rails

Pulkit Goyal published an article about Keep Your Ruby Code Maintainable With Money Rails

Brooke Kuhlmann published an article about Interactive Ruby (IRB)

AbstractBrain published an article about Using Rails Helpers (X Component) For Rendering View Components

Kasra Rismanchi published an article about Rails Harmony: Debugging Your Dockerized App With VScode And RDBG

Guillermo Aguirre published an article about How To Avoid Distributed Data Consistency Coming Off The Rails

Steve Polito published an article about Are Your Polymorphic Relationships Correctly Enforced?

Niklas Häusele published an article about Using The Open Ai Text To Speech Api With Rails

JetThoughts published an article about Custom Ordering Without Custom Sql With Ruby On Rails 7

Amy Lai published an article about The Weirdest Bug I’ve Seen Yet

Francois Buys published an article about Test Doubles: Testing At The Boundaries Of Your Ruby Application

Tobias Pfeiffer published an article about Reexamining Fizz Buzz Step By Step & Allowing For More Varied Rules


Tobias Pfeiffer published a new article about Interviewing Tips: Technical Challenges – Coding & more

Aaron Francis published an article about Targeting Only Inline Code Elements With Tailwind Typography

Please consider becoming a paid subscriber to support this newsletter for just $1.8/week ($7.5/month), and you will receive an ad-free version. Your contribution aids growth and maintains the quality of ShortRuby for everybody:

Support the newsletter for $1.8/week

If you consider upgrading and want more information, please read Why to subscribe to paid.

Advent of Prism: Part 4 - Writes

This blog series is about how the prism Ruby parser works. If you’re new to the series, I recommend starting from the beginning. This post is about writes.

Given that Ruby is a dynamic language, it’s important that it comes with several excellent debugging and introspection features out of the box.

Finding the exact place a method or block of code is defined, and being able to read the related source code, is essential for effective debugging and code comprehension. In Ruby, the source_location method provides a powerful tool for retrieving the file and line number for where a particular method or block is defined.

Explore Rails using…

…the #source_location method.

#=> ["/.../activesupport-7.0.8/lib/active_support/core_ext/string/inflections.rb", 60]

Then open from the command line:

bundle open activesupport

…which opens the source code of the gem in your editor of choice.
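The mechanics can be sketched without Rails at all; here the Inflections module is a purely illustrative stand-in for Active Support’s String extensions:

```ruby
# source_location works on any Method object whose method is defined in Ruby.
module Inflections
  def titleize
    split.map(&:capitalize).join(' ')
  end
end

String.include(Inflections)

file, line = "hello world".method(:titleize).source_location
puts file                    # the file where `def titleize` lives
puts line                    # the line number of the definition
puts "hello world".titleize  # => "Hello World"
```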


Use of source_location is invaluable when you’re new to an application, re-exploring unfamiliar code, or trying to understand which gem is providing the functionality you’re using.

Reading source code is a great way to learn. Reading battle-tested code like that of Rails itself, or other gems, even more so.

Thanks to the authors of the framework, source_location also works for “magic” meta-programmed methods in Rails. Methods on Active Record associations that are generated using class_eval pass special syntax to enable the lookup to still work. If it weren’t for this, when you called source_location on these methods, you’d always just see (eval) as the first result.

class Car < ApplicationRecord
  has_many :seats

  # ...
end

Car.new.method(:seats).source_location
#=> [".../activerecord-7.0.8/lib/active_record/associations/builder/association.rb", 103]

Then open from the command line:

bundle open activerecord

Why not?

While you’re experimenting, you might find source_location doesn’t always provide a helpful result. Methods included as part of “core” Ruby are often implemented in C, and their definitions are not directly accessible from Ruby code. Therefore, calling source_location on a core method will typically return nil.

"hello".method(:upcase).source_location
#=> nil

It also won’t work for methods that use a C extension (where Ruby code calls out to C). source_location only works for methods defined in gems where the source code is in Ruby.

And while source_location can be invaluable during development and debugging, don’t accidentally include it in production code!

The book Crossing the Chasm uses a metaphor of traversing a deep chasm to illustrate the perils of moving from early product-market fit (PMF) to mass-market growth. I’d like to propose a related metaphor for getting to initial PMF: Crossing the Rapids.

The earliest stage of work on a new product is a lot like crossing a raging river.

The goal is to reach product-market fit on the opposite shore, but stepping off of the near shore into your initial go-to-market is risky. Each step your company makes – from slippery, unstable rock to slippery, unstable rock – could be your last.

Many founders find this early-stage dynamic deeply frustrating. Our minds are set on the big opportunity that the opposite shore represents – the new world made possible by our initial set of features and a steadily growing userbase. That big opportunity is why we’re doing all of this in the first place! Because our sights are set on the horizon, the time and effort spent staring at our feet deciding where to step next feels tedious and infuriating. Our gut tells us it’s a time-wasting distraction in the face of our far more inspiring medium- and long-term vision.

And yet, as the metaphor suggests, those early moves are critical and fundamental. Each step could sink the project, or strand us in the middle of the river with no path forward. Our path across the river also determines where on that opposite shore we’re going to land, which in turn determines where we head from there.

As an example, imagine that you’ve got an idea for a dog-walking app that you’ve been thinking about for years. You’ve saved up some money and think it might be time to quit your job in order to work on it full time. Leaving your day job represents that big, risky step off of the near shore. It’s critical to locate that first rock you’re going to step off to before making the leap. This rock could represent an early adopter community – perhaps suburban dog walkers whose unique needs haven’t been addressed by the market yet, which has been focused solely on urban dog walkers.

But which suburban dog walkers? Is the initial focus going to be on a specific geographic area? Freelancers vs. dog-walking services? This decision will determine whose pain points you focus on and therefore what you end up building first, which will in turn determine which customers you attract next. You want to pick an initial rock that will be the most immediately stable but which will also set you up for an easy step to the next rock. And those rocks need to line up directionally towards the opposite shore, not laterally up or down the river.

If your product idea requires deeper technical innovation, that first step might not be an early adopter market, but instead could represent the fundraising and team-building you need to do in order to move forward. Do you pursue federal grants or angel investors? These paths require very different activities, take different amounts of time, and set you on different medium-term paths. It’s important to understand those differences and weigh them against your own personal runway – how much time can you spend without funding or revenue before you need to start drawing a paycheck of some kind?

I like the metaphor of crossing river rapids because it’s one nearly everyone has some experience with (even if just from watching movies!), and because it gives an accurate sense of the way this stage of work should feel: You want to move quickly and thoughtfully. Pivoting is to be expected since it’s rare that nature will have lined up a perfect line of equidistant rocks straight across the river for you.

Most importantly, the river metaphor helps frame how to think about that opposite shore. If you keep your eyes solely on the opposite shore instead of the rocks at your feet, you’re almost guaranteed to make a misstep onto a slippery rock, an unstable rock, or straight into the icy rapids. Luckily, the opposite shore is a large target – impossible to miss! So don’t worry so much about taking your eyes off of it for a few hours, days, or even an entire week. ;)

The Range class of Ruby allows developers to work with sequences of values conveniently.

Ranges in Ruby can be created using the .. (inclusive) and ... (exclusive) notations, where .. includes the end value, and ... excludes it.

One commonly used method associated with ranges is size, which returns the number of elements in the range, including both the start and end points.
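As a baseline, size behaves as expected for integer and float endpoints (a quick sketch):

```ruby
# Range#size counts the elements the range would enumerate.
p((1..5).size)       #=> 5 (inclusive end counts 5)
p((1...5).size)      #=> 4 (exclusive end leaves 5 out)
p((10...15.5).size)  #=> 6 (the integers 10 through 15 fall inside)
```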

However, a bug related to Rational endpoints in the size method has been identified and addressed in the upcoming Ruby 3.3 release.


A bug in versions prior to 3.3 causes incorrect behavior when the endpoint is a Rational number. Let’s illustrate this with examples:

product_price_range = (10...15.5r)
product_price_range.each { |price| p price }

Output => 10, 11, 12, 13, 14, 15

puts product_price_range.size

Output => 5 (Incorrect)

In the above example, we create a range from 10 to 15.5 (exclusive) with a Rational endpoint. The each method correctly iterates over the range, printing 10, 11, 12, 13, 14 and 15.

However, when we query the size using the size method, the expected result of 6 is not returned; instead, an incorrect size of 5 is reported.


In Ruby 3.3, the bug related to Rational endpoints in the size method has been addressed.

The fix ensures that the size method correctly accounts for the endpoint, providing an accurate count of elements within the range. Let’s revisit the example:

product_price_range = (10...15.5r)
product_price_range.each { |price| p price }

Output => 10, 11, 12, 13, 14, 15

puts product_price_range.size

Output => 6 (Correct!)

With the bugfix in Ruby 3.3, the size method now returns the expected result of 6, including the Rational endpoint in the count.

To know more about this bugfix, please refer to this PR.

Advent of Prism: Part 3 - Reads

This blog series is about how the prism Ruby parser works. If you’re new to the series, I recommend starting from the beginning. This post is about reads.

I spent the month of November writing 50,000 words for National Novel Writing Month. This makes me a NaNoWriMo “winner” and I get bragging rights for a whole year that I wrote a novel.

I’ve written quite a few books already, but all of them have been tech books. You could argue that at least one of them, Maintainable Rails, is a work of fiction based on its title alone… but that’s a long bow to draw and very subjective.

National Novel Writing Month (NaNoWriMo for short) encourages budding novel authors to write a piece of fiction that’s 50,000 words long over an entire month. Traditionally the month to achieve this in is November. This works out to be 1,667 words a day or 3 full A4 pages of text, every single day, for 30 days straight.

I tried doing this last year and got up to 20,000 words and then bailed at the end of the 2nd week when I couldn’t work out where to take my characters next. I spent the whole year since then stewing on my “failure”.

This year, I intentionally kept my scope narrow. A small cast of characters and a tight location.

The premise: The protagonist is forced to return to the office of a large tech company, and discovers that the company has undergone a hostile takeover. The company starts encouraging a religious devotion and cult-like fervour for work. Colleagues who express the most devoutness for the company start getting promoted, and end up disappearing, with their disappearance explained away by upper management. The protagonist investigates their disappearance and discovers that things aren’t what they appear to be. They discover that the takeover was done by hostile entities from another reality who use the lives of the employees to fuel their conquest of this reality.

I chose this setting as a return to the office is a “nightmare situation” for me. (I exaggerate quite a lot here.) I live 250km+ from the nearest capital city, and commuting into an office would mean a 4-hour commute, and that’s just one way. I’m sure if there was a “return to the office” mandate from where I work now, they would understand that the logistics of doing so are quite difficult!

For the book, I drew this “nightmare situation” far past its reasonable conclusion, and attempted to write something that skewered the almost cult-like devotion that large tech companies implicitly require from their employees.

Turns out, this was fertile ground as I was able to pull 50,028 words out based on the premise.

I spent October writing notes and ideas for the book into a single note file on my phone. Whenever I came up with an idea, no matter how silly, I wrote it down. This ended up being about 400 words itself.

Then when November 1 came around, I opened up Pages and the notes side by side and started writing based off the ideas. I started writing in a linear fashion, but after a few days I moved on from that and started writing whatever came to mind. I would think of a different scene, or even a different interpretation of an existing scene, and write the scene again, taking it in another direction.

This may seem counter-intuitive to writing a novel. But the choice I made was that this novel probably won’t ever see the light of day, at least in this incarnation, and so it didn’t matter if things weren’t a perfect line from start to end. So I sat down and wrote whatever I felt like, with an absolute insistence to myself and my family that I would hit the word target of 1,667 words each and every day for November.

And I managed to do that every day, bar one absolutely bonkers, incredibly busy Tuesday in Week 3. The next day was brutal, and I ended up writing 3,500 words across two sessions, one in the morning and one at night. After the night session, I went immediately to bed and slept the sleep of the dead. On Thursday I wrote the daily quota in the morning and went to bed at 8 that night. I pushed super duper hard that week and certainly felt it!

In terms of things that helped: No Plot? No Problem! written by the guy who started NaNoWriMo, Chris Baty, helped set expectations for what to expect each week. The hyped exuberance of Week One, followed by the Pit of Despair and wanting to destroy everything of Week Two. Fucking hell, that was a rough week.

The other big thing: The overwhelming urge to let your Inner Editor rampage through your work all the time. I tried to keep him in his kennel, but he did escape from time to time.

The book was packed full of helpful advice from Chris and other NaNoWriMo winners, with a touch of whimsy thrown in; I would recommend this guide to anyone else attempting this project too.

I wrote most mornings from 6amish to 7amish, while my daughter played on her iPad next to me on the desk. I occasionally wrote in the afternoons during a lunch break too. If I hadn’t finished writing by the night, I’d finish writing after my daughter went to bed. I managed to fit the writing in around my work and life schedule, without it interfering too much… although there were some times the dishes weren’t done or a gym session got missed.

Sometimes I wrote on my phone at the park while Ella played on the swings, or at swimming lessons while she was there too. Writing on the phone is quite slow compared to the bigger keyboard (about 30wpm vs 120wpm), but it meant that I could spend more time thinking about plot directions and what characters’ motivations were.

Now that the writing project is over, I’m going to let it sit for a while. I might revisit it, or I might not. I’m still feeling quite satisfied that this year I was able to write a “full novel”. Perhaps next year I could set the goal of publishing one? Either way, you can be sure I’ll be bragging about this all year.

Temporary databases for development

At RailsEventStore we have quite an extensive test suite to ensure that it runs smoothly on all supported database engines. That includes PostgreSQL, MySQL and SQLite in several versions — not only the newest but also the oldest supported releases.

Setting up this many one-off databases and versions is a mostly solved problem on CI, where each test run gets its own isolated environment. In development, at least on macOS, things are a bit more ambiguous.

Let’s scope this problem a bit — you need to run a test suite for the database adapter on PostgreSQL 11 as well as PostgreSQL 15. There are several options.

  1. With brew, that's a lot of gymnastics. First installing both desired major versions. Then perhaps linking to switch the currently active version, starting the database service in the background, ensuring header files are in the path to compile the pg gem, and so on. In the end you also have to babysit any accumulated database data.

  2. An obvious solution seems to be introducing Docker, right? Having many separate Dockerfile files describing database services in the desired versions. Or just one Dockerfile starting many databases on different external ports. Any database state being discarded on container exit is a plus too. That already brings much-needed convenience over plain brew. The only drawback is probably the performance — not great, not terrible.

What if I told you there’s a third option? And that database engines on UNIX-like systems already have that built-in?

The UNIX way

Before revealing the solution let’s briefly present the ingredients:

  1. Temporary files and directories — with the convenience of the mktemp utility generating unique, non-conflicting paths on disk. If these are created under /tmp, there's the additional benefit of the operating system performing periodic cleanup for us.

  2. UNIX socket — an inter-process data exchange mechanism where the address is on the file system. With TCP sockets one addresses host:port, and the communication goes through the IP stack and routing. Here instead we “connect” to a path on disk, with access controlled by file permissions too. An example of such an address is /tmp/tmp.iML7fAcubU.

  3. Operating system process — our smallest unit of isolation. Processes are identified by PID numbers; knowing such an identifier lets us control the process after we send it into the background.

Knowing all this, here’s the raw solution:

TMP=$(mktemp -d)
DB=$TMP/db
SOCKET=$TMP

initdb -D $DB
pg_ctl -D $DB \
  -l $TMP/logfile \
  -o "--unix_socket_directories='$SOCKET'" \
  -o "--listen_addresses=''\'''\'" \
  start

createdb -h $SOCKET rails_event_store
export DATABASE_URL="postgresql:///rails_event_store?host=$SOCKET"

First we create a temporary base directory with mktemp -d. What we get from it is a random, unique path, e.g. /tmp/tmp.iML7fAcubU. This is the base directory under which we’ll host the UNIX socket, the database storage files and the logs the database process produces when running in the background.

Next the database storage has to be seeded with initdb at the designated directory. Then a postgres process is started in the background via pg_ctl. Command-line switches are enough to configure it. These tell, in order: where the logs should live, that we communicate with other processes via a UNIX socket at the given path, and that no TCP socket is needed. Thus different processes never compete for the same host:port pair.

Once our isolated database engine unit is running, we prepare the application environment: creating the database with the createdb PostgreSQL CLI, which understands UNIX sockets too, and finally letting the application know where its database is by exporting the DATABASE_URL environment variable. The URL completely describing a particular instance of a database engine in the chosen version may look like this: postgresql:///rails_event_store?host=/tmp/tmp.iML7fAcubU.
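The shape of that URL is worth a second look: everything between :// and the path is empty. A quick Ruby sketch (hypothetical, standard library only) shows how the pieces decompose:

```ruby
require "uri"
require "cgi"

url = URI.parse("postgresql:///rails_event_store?host=/tmp/tmp.iML7fAcubU")

# The authority part is empty: no host, no port. The engine instance
# is identified solely by the socket directory in the query string.
database = url.path.delete_prefix("/")
socket   = CGI.parse(url.query)["host"].first

puts database # rails_event_store
puts socket   # /tmp/tmp.iML7fAcubU
```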

Once we’re done with testing, it is time to nuke our temporary database: first stopping the background process, then removing the temporary root directory it operated in.

pg_ctl -D $DB stop
rm -rf $TMP

And that’s mostly it.

Little automation goes a long way

It would be such a nice thing to have a shell function that spawns a temporary database engine in the background, leaving us in the shell with DATABASE_URL already set and cleaning up automatically when we exit.

The only missing ingredient is an exit hook for the shell. One can be implemented with trap and stack-like behaviour built on top of it, as in modernish:

pushtrap () {
  test "$traps" || trap 'set +eu; eval $traps' 0
  traps="$*; $traps"
}

The automation in its full shape:

with_postgres_15() {
    pushtrap() {
      test "$traps" || trap 'set +eu; eval $traps' 0
      traps="$*; $traps"
    }

    TMP=$(mktemp -d)
    DB=$TMP/db
    SOCKET=$TMP

    /path_to_pg_15/initdb -D $DB
    /path_to_pg_15/pg_ctl -D $DB \
      -l $TMP/logfile \
      -o "--unix_socket_directories='$SOCKET'" \
      -o "--listen_addresses=''\'''\'" \
      start

    /path_to_pg_15/createdb -h $SOCKET rails_event_store
    export DATABASE_URL="postgresql:///rails_event_store?host=$SOCKET"

    pushtrap "/path_to_pg_15/pg_ctl -D $DB stop; rm -rf $TMP" EXIT
}

Whenever I need to be dropped into a shell with Postgres 15 running, executing with_postgres_15 fulfills it.

The nix dessert

One may argue that using Docker is familiar and temporary databases are a solved problem there. I agree with that sentiment at large.

However, I made my peace with nix a long time ago. Thanks to numerous contributions and initiatives, using nix on macOS is nowadays as simple as using brew.

With nix manager and nix-shell utility, I’m currently spawning the databases with one command. That is:

nix-shell ~/Code/rails_event_store/support/nix/postgres_15.nix

As an added bonus over the previous script, this will fetch PostgreSQL binaries from the nix repository when the given version is not already on my system. All the convenience of Docker without any of its drawbacks, in a tailor-made use case.

with import <nixpkgs> {};

mkShell {
  buildInputs = [ postgresql_15 ];

  shellHook = ''
    ${builtins.readFile ./}

    TMP=$(mktemp -d)
    DB=$TMP/db
    SOCKET=$TMP

    initdb -D $DB
    pg_ctl -D $DB \
      -l $TMP/logfile \
      -o "--unix_socket_directories='$SOCKET'" \
      -o "--listen_addresses=" \
      start

    createdb -h $SOCKET rails_event_store
    export DATABASE_URL="postgresql:///rails_event_store?host=$SOCKET"

    pushtrap "pg_ctl -D $DB stop; rm -rf $TMP" EXIT
  '';
}

In RailsEventStore we’ve prepared such expressions for numerous PostgreSQL, MySQL and Redis versions. They’re already useful in development and we’ll eventually take advantage of them on our CI.

Happy experimenting!

This year I decided to try my hand at the Advent of Cyber challenges.

The Day 2 challenge involves Data Science. We are given a Jupyter Notebook file containing a table of log data showing port scan events. The challenge teaches you how to use Jupyter Notebook and Python, but we’re not going to solve it using Python. We are going to solve it using only Ruby!

While Python is very popular in the Data Science field, you can do Data Science with Ruby. Ruby’s standard library comes with many useful methods, such as map, select, and group_by, which allow you to slice and dice large datasets.

First, we will need to liberate the data from the Jupyter Notebook. To do this, we open the Jupyter Notebook, navigate to the Table View, select all rows, copy the rows, and paste into a text file.

PacketNumber	Timestamp	Source	Destination	Protocol
1	05:49.5	HTTP
2	05:50.3	TCP
3	06:10.3	HTTP
4	06:10.4	ICMP

The rows will paste as Tab Separated Values (TSV). We will need to convert the rows into Comma Separated Values (CSV). Converting from TSV to CSV is as simple as the following vim substitution command %s/\v\t/,/g.
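If you would rather skip vim, the same substitution takes one line of Ruby (the sample rows below are made up for illustration, and a naive gsub is fine here because no field contains a comma):

```ruby
# A stand-in for the copied rows; real data would come from data.tsv.
tsv = "PacketNumber\tTimestamp\tProtocol\n" \
      "1\t05:49.5\tHTTP\n" \
      "2\t05:50.3\tTCP\n"

# Replace every tab with a comma: TSV in, CSV out.
csv = tsv.gsub("\t", ",")
puts csv
```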


Much better. Finally, we save the file to data.csv.

Next, we will spawn an Interactive Ruby session using irb with the csv library preloaded:

$ irb -r csv

Now we will load our data.csv file into a variable:

csv ="data.csv", headers: true)

Now we just have to answer the Day 2 questions using pure Ruby.

How many packets were captured (looking at the PacketNumber)?

csv.count
What IP address sent the most amount of traffic during the packet capture?

csv.group_by { |row| row['Source'] }.max_by { |ip,events| events.count }.first

What was the most frequent protocol?

csv.group_by { |row| row['Protocol'] }.max_by { |protocol, events| events.count }.first

As you can see, you don’t necessarily have to use Python for Data Science. Ruby is more than capable of doing basic Data Science.

Advent of Prism: Part 2 - Data structures

This blog series is about how the prism Ruby parser works. If you’re new to the series, I recommend starting from the beginning. This post is about data structures.

Adarsh Pandit, Executive Director

Adarsh joined us as Executive Director in May 2023. He is a long-time Ruby Central participant and volunteer.

He is the founder and managing partner of the Ruby design studio, Cylinder Digital, where he led a multi-year collaboration with Code for America.

Previously, Adarsh was Managing Director for the San Francisco office of the acclaimed Ruby studio, thoughtbot.

Hello! Welcome to the November newsletter. Read on for announcements from Ruby Central and a report of the OSS work we’ve done from the previous month. In October, Ruby Central's open source work was supported by 35 different companies, including Fastly, Sentry, Ruby member Zendesk and Ruby Shield sponsor Shopify. In total, we were supported by 182 members. Thanks to all of our members for making everything that we do possible. <3

Ruby Central News

RubyConf 2023:

  • This year's RubyConf was a success! Thank you all so much for joining us, bringing all your joy and good energy, and helping to make it such a wonderful time.
  • The playbacks of the talks will be posted on our youtube channel in a few weeks, so stay tuned. In the meantime, we'd love to hear your thoughts about the event: what you liked and what we can do to make next year's event even better. Please fill out this survey and share your feedback, it means a lot to us.
  • You can also relive some of this year's RubyConf vibes via the Ruby on Rails Software Sessions and Rooftop Ruby podcasts, which recorded live episodes on-site. Thank you for helping share the amazing things our Ruby community is doing with the world.
  • Last but not least, thank you to our RubyConf 2023 sponsors! We couldn't have made this all happen without you.

Upcoming Conferences:

Get Involved:

  • A lot of you at RubyConf told us you'd like to get involved and help make our community and events even better. We're so excited to hear this! Check out our leadership page, and/or feel free to shoot an email to our executive director, Adarsh, to find the best way to get plugged in.
  • Want to share your brand at RailsConf or RubyConf in 2024? Secure your sponsorship now to reach over 500 attendees, showcase your thought leadership, and cultivate invaluable industry relationships by emailing our wonderful sponsorships manager, Tom.
  • Remember, you can receive exclusive benefits like conference discounts and more by signing up for a Ruby Central membership.

New newsletter format!

  • We’ve made a small update to our format. (You may have noticed the slight change to the title already). Our newsletter each month will now include a wider range of updates that may span more than just the previous month. You’ll see this most reflected in our OSS report in the following section. 
  • This will allow us to bring you ALL of the most up to date news from both Ruby Central’s OSS and operational teams. It will hopefully also improve the quality of the release notes we bring to you, in terms of timing and usefulness. We hope this helps!

RubyGems News

During October in RubyGems, we released RubyGems 3.4.21 and Bundler 2.4.21.

A couple of noteworthy updates this month include the introduction of a feature to abort setup.rb for outdated Ruby versions (#7011), and efficiency enhancements enabled by removing Dir.chdir from subprocess execution (#6930). We also achieved a major configuration improvement by implementing a pure-Ruby YAML parser (#6615). The documentation also saw significant improvements, with updates to the bindir variable (#7028) and fixes to invalid links (#7008).

Some other improvements that landed in our repo this month that are NOT included in the above releases are:

  • an enhanced continuous integration (CI) by incorporating the latest patch level releases of Ruby, ensuring more robust testing environments (#7036).
  • updates to the SPDX license list to reflect the latest standards as of October 5, 2023. This ensures compliance and accuracy in licensing (#7040).
  • improved formatting and presentation of global source information on the bundle plugin manual page, contributing to better usability and readability (#7045).
  • significant optimization by reusing the Gem::RemoteFetcher instance within Bundler (#7079).
  • modified, more relaxed, pattern matching for Rake versions, allowing for greater flexibility and compatibility in different environments (#7123).
  • refinements to the recent fix related to force_ruby_platform (#7115).
  • enabled automatic switching to user-level gem installations when GEM_HOME is unset and the default gem home is not writable (#5327).

In October, RubyGems gained 160 new commits contributed by 22 authors. There were 3,940 additions and 1,149 deletions across 197 files. News

The updates to in October reflect a strong commitment to improving user experience, enhancing security, and modernizing the platform. Here's a brief overview of the key improvements in the release:

  • implementing a fix for the subscription links on the RubyGems dashboard (#4111).
  • creating a proof-of-concept for integrating Tailwind CSS, aiming to modernize and enhance the frontend design and responsiveness of RubyGems (#4113).
  • resolving ambiguity in ownership uniqueness errors, specifically addressing scenarios where a user is already invited or is an owner (#4119).
  • addressing a critical issue where users who had pushed gems with associated API keys faced difficulties with account deletion. This fix ensures smoother user account management and security (#4130).
  • fixing timestamp fields options feature, refining user interface elements and data accuracy (#4132).

In October, gained 60 new commits contributed by 12 authors. There were 4,532 additions and 2,184 deletions across 181 files.

Total spent

In October, we completed 436 hours of development work and spent $65,431.60.

Thank you

Thank you to all the contributors of RubyGems and for this month! Your contributions are greatly appreciated, and we are grateful for your support.

Contributors to RubyGems:

Contributors to

This is a part of a blog post series about “useless” (or: controversial) syntax elements that emerged in recent Ruby versions. The goal of the series is not to defend (or criticize) the features, but to share a “thought framework” for analyzing their reasons, their design, and the effect the new syntax has on code that uses it. See also the intro/ToC post.

Today’s post covers the feature that was one of the most divisive in the community (sometimes even more so than numbered block parameters): one-line method definitions.


The usual Ruby method is defined like this:

def my_method(args)
  body
end

Since Ruby 3.0, this alternative syntax is also allowed for methods consisting of exactly one statement:

def my_method(args) = body


Ruby is one of the rare mainstream languages that doesn’t use C-like {} as its basic code-block-wrapping punctuation. It also doesn’t use significant whitespace to designate where a code block ends, unlike Haskell or Python. (Almost) every construct ends with end, like in Pascal or Lua.

if condition
  # ...body...
end

items.each do |item|
  # ...body...
end

class C
  # ...

  def m
    # ...
  end
end
This is mostly OK to type, and a modern IDE might do that for you, but when the body and the header of the code block are tiny, the syntax might feel bulky (“feel bulky” is very imprecise here, but we’ll get to more concrete reasons soon).

Most of the code constructs have more compact versions, though—and not just mechanically compact, but expressing small and simple things a bit differently:

return [] if denied? { |item| process(item) }

# Produce a body-less class, just to designate a new type of exception
MyError =

…but not methods! There is only one way to write them.

One could’ve forced methods to fit into one line using ;:

def my_method(args); body; end

However, the Ruby community’s views developed in such a way that using ; is deemed bad taste: it is a sign that you are cramming too much—several logical phrases—into one line1.

Many languages were forced to invent shortcuts for one-expression functions when functional iteration became mainstream, going, in JS’s case, from function(arg) { return val } to arg => val. But Ruby already had code blocks for that, so no evolution of method syntax was necessary2.

“But why would a small syntax non-optimality matter?”—one might ask. (And, depending on the mood, mention code golf as a main association for the “whether it can be put in one line” question.)

Throughout this series, I talk a lot about the comfort of a reader and the perception of the code as a continuous narrative. In this context, “how much of it fits in one page” matters. This doesn’t mean that cramming everything into tight subsequent paragraphs, like a serious book, is a good idea: code isn’t supposed to be primarily read paragraph-by-paragraph.

On the other hand, a two-words-per-line, twenty-words-per-page nursery rhyme-like layout means that one might need to scroll through dozens of pages to get “what’s this all about.”

I imagine a good code layout somewhat like an entertainment printed magazine: reasonably short articles, a lot of breathing space, pull quotes, lists, and schemas to emphasize and draw attention to various parts, removing small details to footnotes, and so on. (Of course, our layouting tools are different, but the effects to achieve are frequently the same.)

But the only pre-Ruby 3.0 syntax for methods turned many of them into nursery-rhyme style text:

# A small value object encapsulating a word
# in text-processing algorithm:
class Word
  attr_reader :text

  def initialize(text)
    @text = text
  end

  def inspect
    "#<#{self.class} #{text}>"
  end

  def ==(other)
    other.is_a?(Word) && text == other.text
  end

  def <=>(other)
    text <=> other.text if other.is_a?(Word)
  end

  def punctuation?
    # ...
  end

  def capitalized?
    # ...
  end
  # ...and so on, I just started!
end

It might be a “value object” fully consisting of such small methods, as shown above3, or a few small methods in a larger object (#inspect, trivial predicates, #to_h, this kind of stuff); the problem stays the same: a page or several pages of context might easily be eaten by code saying “Hello, my name is Jane”-level things.

An unspoken consequence of this situation is that people start avoiding “unnecessary (but useful!) stuff” like convenience methods or whole convenience objects, because it was “just that small thing” in your head and two pages of code in reality.

So… Can those small helper methods become shorter?


The solution was born as an April Fools’ joke.

There is a long-standing tradition of proposing absurd features once a year (here are some selected by a tag, but I think there were many more before). The proposal is frequently supplemented with a dead-serious justification; dedicated jokers frequently provide a patch to the language proving the change is possible. A good-natured discussion arises, with other tracker participants alternating between those who haven’t noticed the date or felt the absurdity, and those who support the joke by discussing syntax details or submitting equally absurd counter-proposals.

Yusuke Endoh’s proposal of 2020, though, was stated in an emphatically unserious tone:

Ruby syntax is full of “end”s. I’m paranoid that the ends end Ruby. I hope Ruby is endless.

So, I’d like to propose a new method definition syntax.

def: value(args) = expression

What happened next was somewhat singular. Matz (Ruby’s BDFL) left a comment:

I totally agree with the idea seriously […] but don’t like the syntax.

And so it happened.

A more natural syntax

def value(args) = expression

was initially considered impossible, but stellar @nobu (Nobuyoshi Nakada) implemented it in one night. (Apparently, this still required a lot of careful juggling with the parser: some limitations the new syntax brought were resolved only in the next version, and some nasty quirks still remain and are discussed below.)

So, there are times when a lighthearted joke might produce an important change to the language (and give it a goofy name: “endless method” is still its semi-official moniker, frequently used on the tracker, though in docs, it is referred to as “shorthand method syntax”).

Irks and quirks

One confusing and unintended problem with one-line methods is related to non-obvious parsing priorities:

class Test
  def initialize(active)
    @active = active
  end

  def invoke = puts "works" if @active
end

# Trying to use it

Instead of printing "works" when the last line executes, this code fails with a confusing message: “undefined method `invoke’”. That’s because of the aforementioned confusing parsing order:

# Expected:
def invoke = (puts "works" if @active)

# Real:
(def invoke = puts "works") if @active

This is most definitely unintended behavior, and one that is apparently incredibly hard to fix, so the discussion is still ongoing.

As usual with syntax quirks, parentheses help!

# This will work as intended
def invoke = (puts "works" if @active)

Another example of the parsing problem:

# valid
def initialize(one_value) = @one_value = one_value

# throws syntax error:
def initialize(two, values) = @two, @values = two, values

# because it is parsed as
(def initialize(two, values) = @two), @values = two, values

# the remedy, again, is to put parentheses around:
def initialize(two, values) = (@two, @values = two, values)

I have a small hope that a tectonic process of switching to a new parser, Prism (it is awesome, read about it), might help to fix the case eventually.


One of the things frequently pointed out in criticism of the new syntax is that it makes one-statement methods “special,” in the sense that once you need a second statement, you have to change the shape completely:

# You had this...
def owner_name = @user&.name || I18n.t('that_thing.default_owner_name')

# ..but what if it becomes a tad more complicated?
# We can't just insert a new line of code right above the existing one:
# need to push code around, remove =, etc.
def owner_name
  default = I18n.t('that_thing.default_owner_name')

  @user&.name || default
end

This property of the syntax is not unusual for Ruby, though. A simple { do_something }, once you need a second statement inside the block, requires splitting into lines (and in many code styles, changing block wrapping syntax for multiline blocks4) and is generally inconvenient.

We can look at it not as an inconvenience, though, but as a suggestivity of the syntax. At the point when your small and elegant one-line method suddenly needs a second line, one might stop (for a brief moment; after all, we think and type pretty fast, we just perceive some things as unnecessary obstacles) and consider one of two scenarios. Maybe there is a way to keep it a one-liner? For the method above, it could’ve probably been something like:

DEFAULT_OWNER = I18n.t('that_thing.default_owner_name')

def owner_name = @user&.name || DEFAULT_OWNER

…which, depending on the case and the codebase, could represent a cleaner separation of concerns.

There might be another case when two or more statements are what really represents the method’s needs. In this case, those few strokes of “rewrite” are also useful: they let you update an “internal model” of the method from “one phrase” to “several phrases” (and this might lead to adjusting the name, say).

The “one phrase perception” is key here—and the main thing the new syntax added: one-phrase methods that are written as such, just like a trailing if. Reading the “classic” method, even the smallest one:

def size
  @objects.count
end

…the internal voice would read: “There is a method size. It is calculated as @objects.count. That’s it.”

While the one-line one:

def size = @objects.count

…is read “The method size is @objects.count.”

The character count here is not that important. Heck, even, paradoxically, the line count is not! While the shorthand syntax is frequently dubbed “one-line method syntax,” this is perfectly valid code:

Event = Data.define(:kind, :context, :timestamp)

def event(kind) =
    kind: kind.to_sym,
    context: self,
    timestamp:
  )

This still reads as exactly one phrase: “event is a method that produces Event instances.”

On the other hand, in the code that makes good use of such “one-phrase” methods, one might consciously leave the one-expression method multi-phrase, emphasizing its non-triviality (and a general feeling of “here, stop and read”):

def send_event(kind, payload)
  EventQueue.instance.push(, **payload))
end

So it is not about making all one-expression methods endless mechanically, but about a tool of thought, a tool of communication.

And here I’ll again repeat the convoluted yet expressive example from “Pattern Matching / Taking it further,” where several new syntax features play together, to the effect of “multi-body pattern matching method”, with one-line method syntax bringing final touches to the structure:

def slice(*) = case [*]
  in [Integer => index]
    p(index:)
  in [Range => range]
    p(range:)
  in [Integer => from, Integer => to]
    p(from:, to:)
end

slice(1)    # prints {:index=>1}
slice(1..3) # prints {:range=>1..3}
slice(1, 3) # prints {:from=>1, :to=>3}

And while I don’t expect using endless methods for multi-line, yet “logically one phrase” methods to have a large mind share anytime soon, having it as an expressive tool might adjust community outlooks with time.

A weekly postcard from Ukraine

Please stop here for a moment. This is your weekly mid-text reminder that I am a living person from Ukraine, and a bit of useful related information.

One news item. Besides everything that happens on the frontline, a couple of days ago, Russians shelled Seredina-Buda in the Sumy region, killing two adults and a seven-year-old girl. To understand it better, I advise you to look at the town on a map (and compare with a map of active warfare). Russians constantly shell the Northern parts of Ukraine to terrorize and in the hope of provoking “an unjustified attack on Russian soil.”

One piece of context. Last Saturday was Holodomor Victims Remembrance Day. Here is the Ukraine Explainers thread giving important context on one of the previous genocides Russians attempted upon our country.

One fundraiser. Finland-based Ukrainian game designer Sergey Mohov and his charity Polubotok Treasury have an active fundraiser (with convenient donation options) to help the Ukrainian Armed Forces. Please consider donating!

Please proceed with the rest of the article.

How others do it

It goes without saying that in many functional (or, in our post-modern times, “functional-first”) languages, name = expression is the main way of defining functions. (How multi-expression functions look in such languages is a separate question.) Haskell:

add x y = x+y

In the context of this article, it is more interesting, though, to look at how the problem is addressed in languages of the closer paradigms.

As was pointed out at the beginning, most mainstream languages nowadays use C-style {} to wrap their code blocks, and so the problem doesn’t manifest itself as pressing: you can always just write header { body } in one line as necessary, and the wrapping punctuation neither prohibits this nor takes too much space; it is also easy to mentally skip while reading it as a phrase.

Even so, the special role of one-phrase methods is recognized, say, by C# and Kotlin:

// regular function:
fun double(x: Int): Int {
  return x + x
}

// single-expression function:
fun double(x: Int): Int = x + x

(For both languages, dropping the braces and the return clause, with the value of the single expression implicitly returned, seems to be the point of those shorthands.)

Another group is languages with significant whitespaces: many of them allow writing a short function body in the same line as the header, like Python:

# regular function:
def is_even(x):
  return x % 2 == 0

# ...can be written this way:
def is_even(x): return x % 2 == 0

…bringing both shortness and “it is just one phrase” effect (though Python’s mandatory return might sometimes feel redundant).

Scala and Nim even use = as a symbol between the header and the body, which makes one-line method definitions look almost exactly like Ruby’s.

Julia, which is, like Ruby, one of the few languages with an end keyword, has, like Ruby, an =-driven shorthand (yet, unlike in Ruby, it creates a value that is possible to pass around—see more about it below):

# regular
function f(x,y)
  x + y
end

# shorthand:
f(x, y) = x + y

As a counter-example, some newer languages with {} in their syntax and a default formatter in the toolbox not only avoid special one-expression forms (without braces) but tend to set the default formatting rules to prohibit writing a short one-expression body with braces in one line. Here are the corresponding discussions for Go and Rust, the latter stating in no unsure terms (emphasis mine):

Some people like to fit a whole function (decl and body) on one line. They are wrong, but we should support it as an option.

Taking it further

Quite a few Rubyists were displeased with the new syntax specifically due to usage of =, which is “too similar to assigning the value,” and therefore muddies the distinction between values and methods.

It would be fair to say that the strong distinction between “values” and “methods” is a fruit of the strictly imperative upbringing of the developer. Today, it isn’t necessary to take a full course of functional programming to be accustomed to the fact that, yes, you actually can put a function into a variable, and treat it as a data type. After all, the ubiquitous JS has it!

// A definition!
function foo() { return 3 }
// A variable!
foo = function() { return 3 }

(Though I admit, even for users of those languages, it sometimes requires a good mentor or a book to reveal the “function can be a value” idea. I met an experienced and productive Rubyist who felt it was “weird” to pass around a block of code that was passed to a method as &block; when they needed a functional value, they just used -> {} lambdas.)

So using = to define a function is not an esoteric choice.

On the other hand, it might feel misleading!

As I explained some time ago, Ruby doesn’t have a natural way to use a method as a value. The shortest you can do is to invoke method(:method_name), which creates a Method object on the fly, with both performance and readability penalties.

So, while in, say, C#, when you write an assignment-like definition:

int multiply(int x, int y) => x*y;
// you can do this:
var m = multiply;
// or this:
Func<int, int, int> m2 = multiply;
…so, you really have assigned some value. But in Ruby:

def multiply(x, y) = x * y

# No, this immediately attempts to call multiply, and fails
# due to missing arguments.
m = multiply
# That's the only way
m = method(:multiply)

…and “assignment-like syntax” feels less justified.

So I think: maybe there is a possible future where everybody gets so used to writing def method(...) = that the idea of method values becomes naturally necessary, and there might be one more attempt to bring them to the language.


I fully expect at least some readers to catch on to the “April Fools’ joke” theme and use it as proof of the “language’s awfulness” and the “uselessness of syntax features” (especially considering my honest coverage of the implementation’s shortcomings).

But I didn’t come here to preach Ruby’s superiority (nor to expose its unworthiness).

My theme here is how the language changes in response to our understanding of what and how we want to say and how our understanding is adjusted by the language changes.

For programming languages, the process is as natural, inevitable, and perpetual as for the human ones.

Unlike human languages, many programming languages change not only in response to their factual usage but also in response to their inherent values. What draws me to Ruby is what I feel is its inherent value: phrase-level expressiveness for story-level clarity.

I am not saying that it is the “only expressive language” (or even “one of the most expressive ones”), and many of its design decisions have become questionable over the years. The only thing I am saying is that it is, for me, the language that consciously thinks this way and makes me think, too, and, hopefully, produce not the most mundane texts from those thoughts.

On this note, I have finished covering the last feature in the series I planned. There will be one more post with some general conclusions and probably a bit of bonus content: a list of Ruby syntax elements that I actually don’t like (there are some!) and a list of those that could have materialized but haven’t (yet?).

But now December is upon us, and it means the upcoming Ruby release, which, in turn, means I need to work on this year’s changelog, and it usually takes quite some time. This year, I plan to publish a few diary notes from this work—and then get to other topics, including the “useless sugar” series wrap-up, after the New Year.

You can subscribe to my Substack to not miss it, or follow me on Twitter.

Thank you for reading. Please support Ukraine with your donations and lobbying for military and humanitarian help. Here, you’ll find a comprehensive information source and many links to state and private funds accepting donations.

If you don’t have time to process it all, donating to Come Back Alive foundation is always a good choice.

If you’ve found the post (or some of my previous work) useful, I have a Buy Me A Coffee account now. Till the end of the war, 100% of payments to it (if any) would be spent on my or my brothers’ necessary equipment or sent to one of the funds above.


The syntax is helpful, though, when Ruby is used in its old vocation: as a scripting language for invoking from a console, writing quick one-time scripts, and running fast, focused experiments.


Standalone lambdas—which in Ruby aren’t directly related to methods—were following the common trend and changed from lambda { |arg| body } to ->(arg) { body }.


Yes, inheriting from Struct or Data would make some of these methods unnecessary, but that’s not the point.


Ruby has two syntax constructs for block wrappers: do/end and {}. There are two styles of choosing one of them: a part of the community uses {} only for one-line blocks (and switches to do/end for multiline ones), another part, including yours truly, prefers to keep {} for “functional” blocks that return values (like map or filter) and use do/end only for multi-line, imperative iteration.
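A trivial sketch of the second convention, side by side (the example names are mine):

```ruby
numbers = [1, 2, 3]

# "Functional" block: it returns a value, so {} fits
doubled = numbers.map { |n| n * 2 }

# Imperative, multi-line iteration: do/end
numbers.each do |n|
  puts n
end
```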

My client project uses a polymorphic relationship between several models in an effort to create a flexible system of associations.

However, I realized that this system was too flexible because it did not enforce the relationships as expected.

Our base

Here’s the domain we’ll be working with in this tutorial. The important thing to note is that a product has_many :pictures and an employee has_one :picture.

class Employee < ApplicationRecord
  has_one :picture, as: :imageable
end

class Product < ApplicationRecord
  has_many :pictures, as: :imageable
end

class Picture < ApplicationRecord
  belongs_to :imageable, polymorphic: true
end

The problem

I’ve previously written about the limitations of a has_one relationship, and this is no different. As you can see, it’s still possible to associate more than one picture with an employee.

employee = Employee.last
Picture.create(imageable: employee)

Picture.where(imageable: employee).count
# => 2

A naïve solution

In the previous article we solved this by creating a unique index. Since we’re working with a polymorphic relationship, we’ll need to make this index on the imageable columns.

class AddContstraintToPictures < ActiveRecord::Migration[7.1]
  def up
    add_index :pictures, [:imageable_type, :imageable_id],
      unique: true,
      name: "by_employee"
  end

  def down
    remove_index :pictures, name: "by_employee"
  end
end

Then, we can complement the unique index by adding a validates_uniqueness_of validation.

--- a/app/models/picture.rb
+++ b/app/models/picture.rb
@@ -1,3 +1,5 @@
 class Picture < ApplicationRecord
   belongs_to :imageable, polymorphic: true
+
+  validates_uniqueness_of :imageable_type, scope: :imageable_id
 end

However, this approach is too heavy-handed. Although it prevents an employee from having more than one picture, it also prevents a product from having more than one picture.

product = Product.last!

picture = product.pictures.create
picture.valid?
# => false

picture.errors.messages
# => {:imageable_type=>["has already been taken"]}

An improved solution

What we need is a partial index. This allows us to conditionally enforce the uniqueness constraint. In this case, we want to do this when the imageable_type = "Employee".

--- a/db/migrate/20231123105601_add_contstraint_to_pictures.rb
+++ b/db/migrate/20231123105601_add_contstraint_to_pictures.rb
@@ -2,7 +2,8 @@ class AddContstraintToPictures < ActiveRecord::Migration[7.1]
   def up
     add_index :pictures, [:imageable_type, :imageable_id],
       unique: true,
-      name: "by_employee"
+      name: "by_employee",
+      where: "imageable_type = 'Employee'"

We can also add this condition to the uniqueness validation by using the conditions option.

--- a/app/models/picture.rb
+++ b/app/models/picture.rb
@@ -1,5 +1,6 @@
 class Picture < ApplicationRecord
   belongs_to :imageable, polymorphic: true

-  validates_uniqueness_of :imageable_type, scope: :imageable_id
+  validates_uniqueness_of :imageable_type, scope: :imageable_id,
+    conditions: -> { where(imageable_type: "Employee") }

Wrapping up

Although this solution enforces our conditional uniqueness constraint in both the database and application, it’s not necessarily the most flexible solution. If you introduce a new model with has_one :picture, as: :imageable, you’ll need to modify the database index.
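For example, if a hypothetical Author model also declared has_one :picture, as: :imageable, the partial index would need to be rebuilt with a wider condition. A sketch of such a migration (class and index names are my own invention):

```ruby
class WidenPictureUniquenessConstraint < ActiveRecord::Migration[7.1]
  def up
    # Replace the employee-only partial index with one covering
    # every type that declares has_one :picture.
    remove_index :pictures, name: "by_employee"
    add_index :pictures, [:imageable_type, :imageable_id],
      unique: true,
      name: "by_has_one_imageables",
      where: "imageable_type IN ('Employee', 'Author')"
  end

  def down
    remove_index :pictures, name: "by_has_one_imageables"
    add_index :pictures, [:imageable_type, :imageable_id],
      unique: true,
      name: "by_employee",
      where: "imageable_type = 'Employee'"
  end
end
```

The conditions lambda on the validation would need the same widening, which is why keeping the list of has_one types in one place is worth considering if this pattern grows.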

Instead, you might want to consider leveraging only the validation at the application level, accepting that without a database-level constraint duplicate records could still be added.
