December 03, 2024

Fortran Types and Component Selection

Derived Types in Fortran

What and Why

Pretty much all languages have support for user-defined data types in some way or another. These can be as simple as a set of data items (called elements, components, properties, members and probably other things too) collected together under a single name, up to fully functioning classes (objects) which can define operators (like + and -), functions (like printing and even the call operator), and act in every way like a "built-in" data type.

Collections of data, without any concept of what the data can do, are often referred to as "POD types" or plain-old-data types, especially in C++ where this distinguishes them from classes like we just described. Note that C++20 refined this specification and now uses some new terminology with extra caveats, but POD is a widely used term pre-20. Fortran and C++ both only really make the distinction between such 'trivial' types and ones with methods etc. in places where it matters. For instance, struct and class in C++ are almost the same, differing only in whether members are accessible by default - this means using a struct for a 'data' type makes it clearer that members are intended to be accessed directly, but doesn't really change anything.

In Fortran, the user-defined 'TYPE' construct (also called a derived type) can be a POD type, or can have functions (aka procedures or methods) attached to it. Types can also be 'polymorphic' - loosely thought of as substitutable based on ability rather than identity. True object-oriented programming is therefore possible in Fortran since at least Fortran 2003 (opinions vary on what a language needs to support innately, versus what can be "hacked in", to be considered a true OO contender). Fortran types support access control with the 'public' and 'private' keywords, but we won't cover that here.

All of the code snippets, and more, are available here.

Defining and Using a Type

There are two elements to using your own types: defining one (saying what it looks like) and creating one (making a variable of that type). This is best done by example, so let's use a variation on the usual exemplar and do an 'atom' type.

Defining your type looks like below. 'atom' is going to be the name for the type, and it's going to contain a 'name' and an 'atomic number'. The code is

TYPE atom                     ! The name for the type
  CHARACTER(LEN=30) :: name   ! a field called name, of type string
  INTEGER :: atomic_number    ! an integer field for atomic number
END TYPE atom                 ! Optional to repeat the name here

and it can go in the same places as a variable declaration, e.g. in a MODULE, or at the top of your main program, as long as it's before any attempt to use it.

Now we have the "idea" of an atom, a thing consisting of a name and an atomic number. How do we use this? Well we can create a variable of the 'atom' type like this:

TYPE(atom) :: myAtom ! A variable of type atom

which we often describe as creating an "instance".

We can also create a function which expects something of type atom. This is powerful for two big reasons and plenty of smaller ones.

  • The first big one: the code now documents itself far more clearly. Rather than taking two, possibly unrelated, data items, we see it takes an atom.
  • The second big one: our function interface and call-lines get shorter and more compact, taking a small number of custom types, rather than a long list of generic ones.
  • A third one (with many caveats in reality): we can extend the idea of an atom without changing all our function signatures; although often doing this is a sign of problems.

So here's a function which takes two atoms and returns a third. Imagine we're some sort of weird alchemists...

FUNCTION transmute(first_atom, second_atom) RESULT(combined_atom)
  TYPE(atom) :: first_atom, second_atom, combined_atom
  ...

Derived or user-defined types are used here exactly like any other data type - specify INTENTs as usual, or declare a PARAMETER (remembering that we then need a way to give it values - see the structure constructors below).

Type Components

So we can create a type, but that's not a lot of use without being able to get and set its elements. Like in most languages, what we need is an instance, and then we can refer to its contents. We do this like in this code block:

TYPE(atom) :: myAtom
myAtom%name = "Hydrogen"
myAtom%atomic_number = 1
PRINT*, myAtom

NOTE: the printing is pretty dumb - it just outputs the items in order with default format. We can do better, but that is a topic for a potential future post, not now.

Aside - one reason why it's not a dot

By the way, a lot of people really dislike the choice of '%' for component selection. I do not know for sure if what follows is the defining reason, or just a part of why it's not so easy to fix, but remember that Fortran comes from a time when input characters might have been restricted and it was not certain that somebody had a '<' key. In C the solution was digraphs and trigraphs - 2 or 3 characters that were parsed as one. Think '<=' but potentially far more esoteric.

In Fortran, the decision was made to use the form ".OP." instead, such as .LT. .LE. etc. This was also used for things like .TRUE. and the logical operators .AND. etc. Fortran is also very generous about white-space, caring only when it is needed to divide identifiers.

Now consider this bit of code, if we could use '.' to get type components:

first.lt.other

Is this:

  • a less-than comparison between a variable called 'first' and one called 'other'
  • a nested type object property access, where we have a variable 'first' with property 'lt' which in turn has property 'other'.

Oh dear... If we can't tell, then we have a problem! Some compilers are willing to risk the ambiguity, but some are not, so do not offer the alternative symbol.

Default Values

We can give each element of our type a default value if we want. We do that inline in the definition, such as

TYPE atom                          ! The name for the type
  CHARACTER(LEN=30) :: name = ""   ! a field called name, of type string
  INTEGER :: atomic_number = -1    ! an integer field for atomic number
END TYPE atom

NOTE: if you're worried about 'SAVE' behaviour, types don't have any lurking surprises here. All 'SAVE' would mean is that the type's properties retain their values for the life of the variable which is... exactly as we'd hope it would be. No surprises waiting for us here.

Structure Constructors

Not just a tongue-twister, I promise: just another less-known feature of Fortran types. For the atom type above, we can also create an instance with values set at construction, like this:

myAtom = atom("Hydrogen", 1)

or even using keywords, such as

myAtom = atom(name = "Helium", atomic_number=2)

which we think is actually pretty damn informative! As with function calls, keywords can be in any order and we can mix the two styles, as long as all the positional ones come first.

We can also create a temporary atom this way, for example to pass to a function. In many senses, this is a literal just like the string "Hello" or the number 2. If we store it into a variable, then that variable now contains the data. Otherwise, we can't modify it, or access its individual components*. Mostly we can use it as the source for setting a variable, or to pass to a function.

NOTE: this atom will exist only for the duration of the line where it is created. For a function call this includes the time inside the function. If we don't specify an intent, we can happily pass this temporary and modify it - but this is probably not what we intended, since it will go away before we can name it. Specifying INTENT(INOUT) or INTENT(OUT) will disallow passing a temporary, and ensure we can't make this mistake.

*NOTE: once we have passed a temporary to a function, inside the function we have a dummy variable (formal argument) just like normal. If the dummy is "bound" to a temporary, then we can't write to it, as the previous note says. But we can access the components of the dummy regardless of this. So we can do 'myAtom%name' inside a function having passed atom("Iron", 26), but we can't do atom("Iron", 26)%name. The code for this article, same as linked above, makes this clearer.

Well, I Never Knew That: Arrays of types and arrays of components

We often say that arrays are the great strength of Fortran, and that whole array operations, slices etc are a huge part of the reason why. Well, Fortran kindly allows us to retain all of this when we're using types too. For example, the following is perfectly valid code:

TYPE(atom), DIMENSION(10) :: theAtoms
INTEGER, DIMENSION(:), ALLOCATABLE :: theNums
PRINT'(A)', theAtoms%name
theNums = theAtoms%atomic_number

which will PRINT just the names, and then give us an array of just the atomic numbers.

This is a handy trick in any case, but it's also a great thing to know if refactoring code that doesn't use types (or doesn't use them to their capacity yet). We can pretty freely wrap things into types, even if those types are then in arrays, and we can still unwrap the individual components very cheaply. The resulting arrays are "proper" arrays. We can assign them to things, assign to them, pass them to functions, slice them: everything we expect.

An Example Refactoring

Here's a quick example (see the full code here)

Old:

! Takes 3 arrays for components of position
SUBROUTINE old(x, y, z, dt)
  REAL, DIMENSION(N) :: x, y, z
  REAL :: dt

New:

TYPE pos
  REAL :: x, y, z
END TYPE

! Takes one array of types instead
SUBROUTINE new(p, dt)
  TYPE(pos), DIMENSION(N) :: p
  REAL :: dt

The body of the function can still use idioms like "x = x + 0.1*dt" by just changing to "p%x = p%x + 0.1*dt", even though p is now an array of types. The function signature is clearer, and it is no longer possible to accidentally decouple x, y and z from each other at different indices.

Take Away Point

Types are possibly the best way to make your Fortran code better and more modern for extremely little effort. They make function signatures clearer, let the compiler enforce type checking better, and reduce the "mental load" of having a bunch of connected variables whose relationships you have to remember. Use them in new code, and consider the simple refactoring above for existing code. It won't be wasted effort even if a rewrite is in the future - having the data structures already figured out is a great bonus.


Fortran Refactoring

Fortran into the Future

This blog is about software engineering in academic contexts. We see a lot of "styles"* of code. Improving these codes, through refactoring (rewriting to have the same function, but an improved form), training the academics who write and maintain them, and offering libraries and code snippets that address common functionality well, is a big part of our role.

*styles is in scare-quotes for a reason. Sometimes style is just an excuse for doing it wrong.

In STEM disciplines (at least the first 3 letters of it), Fortran code is pretty common, and some of it is the opposite of pretty. Modern approaches like modularity of design, well-defined interfaces, use of types and Object Oriented features, are often mysterious and underused.

There are two choices for how to handle these old* (see below) codes.

*old: synonyms 'ancient', 'decrepit', 'mature', 'venerable', 'familiar', 'long-lived'. Take your pick. Sometimes something is old because nobody bothered to renew it, sometimes it is old because it works.

Choice 1) Abandon them. Rewrite them in a cool new language (Rust! Julia!). Get rid of the cruft and the dust and make something new and better.

Choice 2) Refactor them. Take advantage of the experience that has gone into them and make them better slowly but surely.

These posts plan to talk about the second approach. Sometimes small improvements can greatly increase robustness, utility and our quality-of-life when we have to deal with these codes, without introducing new bugs and regressions or re-inventing the wheel.

Refactoring is an idea few seem to bother with, and even those who want to do it struggle to find any time or funding for the work!

So let's discuss some of the nice features of Fortran, with half an eye on the less well-known bits, especially those which help us to 'patch in' new features cheaply. These kinds of refactors can reward the effort many times over, without the time and risk of a full rewrite.

A List (hopefully a growing one)

Links TBA once the corresponding post is ready

Types and Component Selection - use the type system for clarity and robustness

Common blocks - strategies for getting rid of them

Explicit Interfaces - a step towards modularity


November 27, 2024

How does MPI parallel code actually run?

One of those things I wondered about for a long time, before just getting on and finding out is this:

when I run an MPI parallel code, what actually happens?

And, before forging ahead to answer my question I should clarify why I want to know - well basically so I can understand what happens in error cases and edge cases, such as starting a program without mpiexec, or mpiexec-ing a non MPI program.

I know that MPI lets me run multiple, interacting, instances of my program, and lets those communicate with each other. But I also know that I can start my program without using mpiexec (or mpirun, srun or any other invocation) and it might work as a serial code - but it doesn't always. I know that MPI_Init is really important, but I don't know what a program can and can't do before that line, or how a completely empty program would behave. I don't understand what an MPI program without any comms in would actually do. I am not certain whether any bits of my program are somehow shared - data or state or communicators.

As usual, I could answer all of these questions individually, but there's a good chance I can answer them all if I can just work out what's missing in my mental model. It turns out this is what I hadn't realised:
 

mpiexec starts N independent copies of my program. When my code reaches the MPI_Init function, communication is established (using some info provided by the launcher) - the copies are made aware of each other and assigned their ranks. MPI_Finalize is where the comms is shut down. *

Obvious in retrospect, but it answers all of my questions.

  • Starting a single instance of my code (a serial version) will work as long as my algorithm _can_ work on a single processor, without deadlocks etc. But starting N independent copies won't make for a parallel run, because the information (or daemons etc.) MPI_Init needs won't be present - it won't know about the other copies, or even how many copies there are.
  • Before the MPI_Init line, my code can do anything that doesn't use communication - no MPI calls, no use of communicators etc. That means I can't know how many processors I am running on (we don't know ranks), or if I am the root (proc 0) or anything like that.
  • A completely empty program, or one where I never call MPI_Init, will run N independent copies, but they will never know about each other. Just like the parts of my program before the Init. This also tells me what happens if I mpiexec a completely non-MPI program.
  • A program that calls MPI_Init but has no actual comms can still be a parallel program - if I can split up my work with nothing other than MPI_Comm_size or MPI_Comm_rank, for instance dividing my work into N blocks, I can do that work in parallel (as long as I am careful about outputting the final product of my work blocks).
  • The one thing I can't definitively answer using this is whether I can mpiexec a program that wasn't compiled as an MPI program. But I can guess, based off the fact that a program without MPI_Init can be valid, that I would probably get N independent programs, and I'd be right, as it happens.
  • Finally, I can easily see that no program state can possibly be shared, because my program copies are independent, with their own memory spaces. Things like communicators must contain information sufficient for the message passing "layer" to pass information between copies of my program.


Note that I put a '*' on my statement of what actually happens - this is "correct from the perspective of my program", but a little incomplete in general. The mpiexec launcher can, and generally does, do some elements of setting up comms, but this is lower level than my program and doesn't affect how it behaves or what it can do. I also omitted anything about the compiler step - since I know MPI uses compiler wrapper scripts, something could happen at this stage, which is why I can't completely answer that penultimate question without using some more information.


October 30, 2024

Another trick for nice wrappers

Follow-up to A tricksy bit of C++ templating from Research Software Engineering at Warwick

In the previous post we wrote a lot about using templating to write very powerful wrapper classes that can relatively easily expose functionality from what they are wrapping. This time, we're showing one final piece of that puzzle: how to wriggle around the wrapper, without just abandoning type safety.

Suppose we have a class in C++. We are allowed this mad looking construct:

#include <iostream>

struct myClass {
  int member; // Data member
  void otherMember(int a){ std::cout << "Called the fn on " << member << " with " << a << '\n'; }
};

int main(){
  myClass tmp;

  // Pointer to class member
  int myClass::*tmpMem = &myClass::member; //<<<<<-----
  // Access member of specific instance
  tmp.*tmpMem = 10;

  // Function pointer to class member function
  // Best to use auto, but doing it long-form to demonstrate
  void (myClass::*tmpFn)(int) = &myClass::otherMember; //<<<<<-----
  // Call the function. Note the brackets
  (tmp.*tmpFn)(2); //<<<<<-----
}

Here, we're getting a "pointer-to-member" - either a data member, or a function, of a specific class. Notice the type of tmpMem: it is a pointer to the member element of some instance of myClass, but not of any specific instance. To use the pointer, we have to apply it to a specific instance. In a lot of senses, you can think of this as containing the instruction to get to a particular member given the start of the object in memory, and for data there's a fair chance it unpacks to something like that behind the scenes.

For a function, the same applies. tmpFn is a pointer to the otherMember function of some instance of myClass. We can call the function on a particular instance. Note the brackets - we need to bracket the whole "callable" chunk on the left, and also supply the relevant arguments. If you get errors about something being "not callable", you probably forgot or misplaced these.

Aside: If the class, or the function is templated, then type deduction will work as usual. If you need to specify a type explicitly, you do it about how you'd expect, like:
void (myClassT<double>::*tmpFnT2)(int) = &myClassT<double>::otherMember;
myClassT<double> tmpT;
(tmpT.*tmpFnT2)(2);

So, why would we ever want to do this? Well, two reasons really.

On the one hand, it's about making the type system work for us. A simple pointer to an int, or a function taking an int and returning void, are very general. A class scoped variant is more restrictive, and lets us enforce that only members of a particular class are valid.

On the other hand, this construct lets us do some things that we could otherwise only do by reflection, usually involving wrappers. We showed some pretty horrible templating in our previous post involving wrapping specific named functions. This idea lets us write something to invoke an arbitrary function on a class we're wrapping, while retaining some type safety and access control (i.e. not just making everything public). Read on to see how.

Implementing a pass-through call

For simplicity, we're going to show the example for a target function which takes one argument and may or may not return anything. A variadic template, in place of the typename T in the definition of invoke, lets us take any number of arguments, of any kind, with some caveats about references. (If we get the chance, we might write about argument forwarding in future.)

So suppose the wrapper class we're writing wraps some inner type. Suppose we're free to substitute anything we want as the inner type, but not allowed to modify the wrapper class, or more usually, that we don't want the wrapper to become unmanageable. Further, suppose the wrapper has some reason to exist, like to log all calls, apply some kind of checks etc.

In this case, what we need is the ability to pass the wrapper a function to call, and have it do its "wrapper-y" work before passing the call on. If this "wrapper-y" work needs any parameters, we can obviously provide them. What we need is the pointer-to-member idiom, like this:

#include <iostream>

struct ST {
  void theFn(int a){ std::cout << a << std::endl; }
};

struct wrapper {
  ST theVal;
public:
  template<typename fn, typename T>
  auto invoke(fn callable, T arg){
    return (theVal.*callable)(arg);
  }
};

int main(){
wrapper myW;
auto fn = &ST::theFn;
myW.invoke(fn, 10);
}

Now that works, but it's a little bit ugly in "user-space". The problem I was doing this to solve was to let me define a few, specific functions on the ST type, interject some checking via the wrapper, and not have to expand my wrapper to handle any possible function I might name. While it's not perfect, if I just define the function below, I am pretty close to being able to do this seamlessly, without a "user" having to know what's going on. Note I have named these to be explanatory, and wouldn't suggest being so heavy handed in reality:

auto call_theFn(wrapper & theW, int theVal){
  theW.invoke(&ST::theFn, theVal);
}

call_theFn(myW, 7);

Compare this to the seamless
myW.theFn(7)

and I think it's alright.

Aside: the variadic version, which is way more useful, is here. Note that if the function takes a value by reference, we have to do something more careful in forwarding our types. That gist should work and preserve reference, const etc.


July 24, 2024

Testing the test suite

Testing is good! Or at least it can be. Tests increase our confidence that code is correct. However, writing bad tests is incredibly easy, and assessing the quality of a test suite is a perfect example of the "crowbar inside the box" problem: if we can be sure we wrote the test correctly, why can't we make the same assertion about the code already?

Easy Cases

There are times when the tests truly are "easier", "simpler" or "more nailed down" and in these cases, we might feel safe.

For example, sometimes, the test is much simpler than the code, and sometimes the test really is externally prescribed, and in these cases, writing tests is pretty safe - we're unlikely to make the same errors in test and code.

Sometimes too, we can form extra, trivial-to-write, tests by not getting hung up on true "unit" testing and working with inverse pairs instead. For example we can verify our file writing and reading are the reverse of each other, and give us the correct object back. We can verify that our "plus" and "minus" are the reverse of each other, and that A + B - B == A. Making the test extremely simple means it has very little chance to be wrong, but it also limits how thoroughly we are testing things.

Code Coverage

You may have heard of coverage testing - adding confidence in your test suite by verifying that it actually "exercises" every line of your code. At least, that's what you hope. However, in general all coverage does is say that you ran a line - it can't say whether you verified the outcome.

But the Tests Passed!

To come to the important point: the tests are testing the code, and the code is testing the tests. If both agree, great. What if they don't? Are you really sure that the error is in the code, not the test? And on the flip side, if the tests pass, does that _really_ make you confident in the code?

Mutant Code

Enter "mutation testing" - a way to test our *test suite* using our *code*.

Mutation testing belongs near the end of development, once code is written, tests pass, coverage is good (all for a given function or set of functions, not necessarily the whole code), and you are wondering if there may still be bugs, or you are trying to get something ready for "prime time" and want to be confident it will work when used in ways you haven't been using it.

It's based on a very simple premise - if the code is broken, the test ought to fail. Sounds obvious, but have a good hard look at some tests you've written or worked with. Are you sure that will happen? Are there cases or paths that don't get checked? Are you catching all the edge cases and transition values in your test?

Once again, if just looking over the test code let you guarantee those things, then you could do the same to your code, and not need the tests at all.

Instead, let's break the code and see what happens. Swap operators, add off-by-one errors, remove checks on bad input! Force the tests to prove that they can fail! Then we know that passing means something.

Practical Mutation Testing

Obviously we want something systematic to do this, and that is actually pretty hard - it is still a topic of active Computer Science research, in spite of having existed for a very long time. But as always, "the perfect is the enemy of the good", or indeed of the "good enough". The more holes we can close, the fewer will be left, so even something imperfect is worth a try.

The idea is as follows:

  • generate systematic "mutations" of the source code
  • check that these compile or run
  • try and systematically eliminate "equivalents" - code that, although different, has the same effect
  • run the test suite against every "mutant"
  • If the tests fail, the mutant has been "killed"
  • If the tests pass, there is a hole - an error they are missing
  • try and fix the holes in the test suite

NOTE: there are a few things this can turn up that do indicate changes to the code are in order, but in general you should primarily be altering the test suite. If that causes tests to fail that you expected to pass, then you need to alter the code. If that shows sections of "dead" code, then you might want to alter the code. Try and keep these two processes separate.

NOTE 2: BE VERY VERY wary about altering your code just to allow it to be tested.

Tools

Recommending a tool is tricky. GitHub right now has ~300 repos tagged mutation-testing, covering a bunch of languages. It also has several repos just listing extant and abandoned tools. The UniversalMutator (https://github.com/agroce/universalmutator) deserves a mention as a language-agnostic and pretty usable option.

If you're not sure if this is worth doing, try flipping a bunch of operators in your code, adding some range errors, deleting a statement etc and running your tests. See if you have gaps to fill.

Brief Aside:

Note for the interested: while that all might sound pretty convincing, there's a lot we are skipping over. For instance, we have to assume that big errors tend to occur due to multiple small errors, and thus squashing the small bugs will squash the big ones too. This is known as the "coupling effect" (see e.g. https://dl.acm.org/doi/10.1145/75309.75324, which also discusses the wonderfully descriptive "Competent Programmer Hypothesis" - the idea that we normally only make small mistakes).

Conclusions

Sometimes it is tempting to talk about testing as infallible. This has come up a lot recently with some high-profile failures of large products, where many people despair that "they obviously didn't test it". That's just not true; we only know they didn't test for the specific failure.

Actually implementing "quality testing" is about a lot more than just adding a few test cases, and in important software, library software etc., it is good to be aware of just how much more is needed to be truly confident in code quality.

If you take one point away from this, let it be this one: A test that cannot fail is no better than no test at all.


March 20, 2024

It's sort of pass by reference, ish

When teaching Fortran, it's often tempting to describe it as a "pass by reference" language. This is almost true. Unfortunately, as the phrase has it, "almost only counts in horseshoes and hand grenades", and "almost true" in programming terms isn't usually good enough. All too often, the sort of complicated technical thing that should only matter to an expert trips up plenty of beginners who, like the Inexperienced Swordsman, "doesn't do the thing he ought to do".

So what's the truth? The truth is that most of the time things work just like pass-by-reference because they are expected to look just as if they were. But actually, they are allowed to do a "copy-in copy-back" process, as long as the final result has the expected changes. Copies can be expensive, so compilers don't tend to actually do this without a good reason.

The place this usually comes to our notice is when we use slices of arrays. For example, we can take every second element of an array in Fortran, very easily. Or we can take half of a 2-dimensional array (in the row direction). Both of these are valid "array slices" but both consist of data that is not contiguous in memory. Items do not follow each other one after the other in memory.

But in Fortran, we can pass these to some function that expects just "an array". Now imagine what would have to happen to do a true "pass by reference" here. The compiler would have to pass the array, and then enough information to "slice out" only the values we wanted. This would be pretty weird, and if we passed that value on to another function also expecting an array, it could easily get out of control. So instead, the compiler will tend to generate the code to copy the values in the slice into a temporary array, use them there, and copy them back when we are done. To us, everything will work seamlessly.

That is, as long as we do what we're supposed to, everything works. But if we start breaking rules, we can get some very odd behaviour from this. Compilers are like puppies - they trust us to keep our promises. In fact, they rely on this so much that they simply assume that we will! When we don't, funny things happen.

The following code is modified from this old post https://groups.google.com/g/comp.lang.fortran/c/z11RW0ezojE?pli=1 to use modern Fortran.

MODULE mymod

CONTAINS

  SUBROUTINE set_to_two(B, C)
    REAL, DIMENSION(10) :: B
    REAL, DIMENSION(5) :: C
    INTEGER :: i
    DO i = 1, 10
      B(i) = 2.0
    END DO
  END SUBROUTINE set_to_two

END MODULE mymod

PROGRAM main
  USE mymod
  REAL, DIMENSION(10) :: A

  A = 1.0
  CALL set_to_two(A, A(1:10:2))
  PRINT *, A

END PROGRAM main

Pretty simple code - it takes an array and sets every value to 2.0. For some reason it also takes a second unused irrelevant parameter, but we can ignore that one, surely?

Run that code, as written with current gfortran, and this is the result:

1.00000000  2.00000000  1.00000000  2.00000000  1.00000000  2.00000000  1.00000000  2.00000000  1.00000000  2.00000000  

The code is bad - passing A as two different parameters to a function violates a promise we make as programmers not to alias variables (call them by two different names). But even knowing this we're pretty surprised to get that result! In particular, on my machine, if I call the function with CALL set_to_two(A, A(2:6)) which violates the aliasing rules just as badly, nothing odd happens at all, and all I get is A = 2.0. In this case, the compiler is able to avoid a potentially costly copy as the data is contiguous even though it's a slice.

It's pretty obvious what is actually happening once we know about the copy-back idea. Because the compiler trusted us not to have two names for the same piece of data (the names here being B and C, and the data being A) it happily copies data for the C argument, copying it back at the end. This copy is never affected by the update to B so its content remains 1.0. That gets copied back into A, overwriting the changes we'd made.

This can happen even though we never use C in the function, so nothing actually changes - that second irrelevant argument is not so irrelevant after all.

Take Home Point

The real take-home point here is not to upset your compiler - don't do things wrong and these sorts of details won't matter. But when things do go wrong, it can be pretty helpful to understand what is actually going on.

Honestly, a lot of the bugs we write as programmers are a case of miscommunication - what we thought we were writing is not what we actually said. This is especially true with modern optimising compilers, which very liberally agree to produce a program that works "as if" it was the one we wrote, and will happily omit things that do not affect the results, or, as in this case, will assume that we are writing correct code and act accordingly.

Fortran is usually a lot nicer to us than C/C++ in terms of undefined behaviour, but as this example shows, it will still do strange things if we break the rules.


December 15, 2023

Gnu Parallel without root

A quick post for future reference: GNU parallel, the command line app to allow parallel running of multiple copies of a program etc, is designed to work just about anywhere. This means we can install it without needing any root access.

DISCLAIMER: This is intended for something like a managed laptop with Linux on. If you are working on shared systems, anything run through parallel would count as computationally intensive - please pay attention to where you are allowed to run such work! Also, check the package or module system for an existing parallel install! It's likely to exist already!

Here's the recipe:

# Do all this in our home directory. If you change to somewhere else, replace $HOME with that path where it occurs

cd $HOME

# Grab that latest source

wget https://ftp.gnu.org/gnu/parallel/parallel-latest.tar.bz2

# We could check the signature of this now, but since we'd grab the sig from where we grabbed the file, there is no point...

# Create an install directory called pllel - change this if you like

mkdir pllel

# Unpack and change to that folder

tar -xvf parallel-latest.tar.bz2

cd parallel-latest

# Configure to install into the directory we just created

./configure --prefix=$HOME/pllel

make

make install

# Now we add the location of the parallel program to our path

# Putting it first (before we repeat $PATH) means it will take priority over any other program called parallel

export PATH=$HOME/pllel/bin:$PATH

# Test everything

parallel echo ::: A B C D


Now, if you want this to be available in all your shells or sessions, you can add the export PATH line to your .bashrc, .bash_profile etc and then you don't have to do that every time.


July 25, 2023

A tricksy bit of C++ templating

Basic C++ Templates

My First Templates

Most people's first encounter with C++ templates is when they want to write a function which can take several types and do the same thing regardless of which. That is, something where the function body ends up the same, but the signatures differ. The templating engine lets you do this really easily, and better still, only the versions of the function you actually use will be created by the compiler which reduces the size of your compiled binary.

Let's make a couple of dummy functions to discuss:

template <typename T>
T myfunc(T);
template <typename T>
void myfunc(const T &);

The first of these is probably the simplest possible template function - we define a name for our 'type' parameter, calling it T, and use T where we would normally use a type such as int, double etc. The line-break is purely convention, but is the common way to write these. We can use T in the signature and/or the body, or we can not use it at all in certain circumstances. This is the 'template' for how the function should look, and if we call it from our code we'll get a version where T is substituted for an actual type, based on, in this case, the type of the argument we pass.

The second block defines a slightly more complex function, where we add some qualifiers to the T type argument, in this case making it a const reference. Note we can overload between T, const T &, T & etc. just like we can with explicit types, int and int & being different functions.

Overload resolution between templated functions, and between templated functions, non-templated functions and what are called template specialisations, is complicated, but can mostly be thought of as going from more to less restrictive - int myfunc(int) would take priority over the templated version, for example.

Templates go in Header Files

The fact that T is a placeholder for an actual type is a chunk of why template _definitions_ have to go in header files (excepting so-called 'explicit instantiation', which we won't discuss here) - the compiler needs the definition to compile the function when it encounters a call to it, and to make sure that the function can compile for that type. For instance, if the function body passes T to other functions, or calls member functions on it (if T is meant to be a class), are these available? If not, the function cannot be compiled. Since a header file is included (directly or indirectly) in every C++ file which needs it, the full definition is always available during compilation.

Or putting it another way, we can't compile a C++ file containing a templated definition on its own, because we don't know which types we're compiling for, and we certainly don't want to compile for every possible type, even if we could, as this would produce a ton of useless code.

By the way, because templates are not completely 'fleshed out' functions, they are implicitly exempted from the 'one-definition' rule, although properly using include guards etc is still a good idea!

Some More Specific Template Stuff

Template Specialisation

So what if we want to use a different version of a function for, say, integers and class types? Well, we can provide an integer _specialisation_ of the template, which will take priority over the general version, like so:

template <typename T>
T myfunc(T in);
template <>
int myfunc(int in);

(Note we have to indicate this is a specialisation using the template<> bit. If we leave that out we get something slightly different and have to consider those overload resolution rules properly - let's not go into the details because they start to hurt).

Suppose in the generic version we use something like 'in.size()' - this won't compile for an integer type. But the compiler doesn't want to do extra work, and is programmed to let an explicit specialisation take priority over a general version, so, having found the specialisation, the compiler stops looking.

Thinking about this for a few moments, we see that a) if this wasn't the case these specialisations would be a bit useless and b) this is actually very powerful - we have potentially uncompilable code, but as long as we never _use_ it we're safe.

Templates For Specific Types

OK, so suppose we have a method that makes sense for int, double, float, long etc - all of the numeric types - and we kinda want a template that only allows one of those. We can do it, although we won't show all the details, because exactly how is messy and varies between C++11, 14, 17 and newer as this ability has been refined. The trick, though, is creating a function template that only works for numeric types, by making something that tries to create an invalid type otherwise. Non-numeric types will fail to substitute, so trying to call our function with these fails - as we desire, since this method doesn't make sense for them.

In some cases we might want to select between methods depending on something like 'is this type numeric', or 'is it an integer', or some other property. Using the same sort of technique, certain potential function overloads will 'not make sense' for certain types, and we want the compiler to skip those and look at the others instead.

This last bit is important - it is a fundamental principle in C++ that templates that "don't work out" don't count as a compiler failure. This is "Substitution Failure Is Not An Error", or SFINAE.

This is very powerful, pretty complicated and a big old mess, so let's leave the details alone! Instead, let's look at something neat and clever which we can do, in the next section.

Decorator or Wrapper Classes

A Basic Wrapper Class - Function Pass-thru

Suppose we're writing a class which is a bit like a Python decorator - we have some class and we want to encase it in another which adds a bit of function. This is the "Decorator" pattern in design pattern terms too. A common idea is to decorate a single function with e.g. code to time its execution. The crux is to have two things which are independent, and the ability to combine them - timing code, for example, would simply execute the function, timing as it goes, and return whatever it returns. It would not and should not care about what the function is, what it does, or how it does it.

A wrapper class is a similar idea, where we want to encase one class in a larger one, and in some sense 'pass through' its functions. Perhaps we want to change the interface (function names, signatures etc) of a class to make it fit better with calling code (the Adapter pattern), or add defaults or something, and we do not want to edit the wrapped class to do so. Or again, suppose we want to decorate the class with some extra function.

As a pretty dumb but decently useful example, suppose we have some logger class for writing information to file or screen, and we wanted to add a timestamp (imagine our favorite system doesn't have this already). Naively, we might imagine just stringifying everything, then passing those to the logger, having prepended our extra info. But this is clumsy - why restrict ourselves to only taking strings? Better to write wrapper functions that take any type using a template. For instance (semi pseudocode)

class logger; // This comes from some library

class timestampedLogger{
public:
logger myLogger;
string getTimeNow(){...}

template <typename T>
void write(T in){
myLogger.write(getTimeNow());
myLogger.write(in);
}
};

We insert a timestamp, but otherwise, we just funnel everything through. In some senses, we're making timestampedLogger substitutable for plain logger, BUT AT COMPILE TIME. Instead of creating a 'logger' we create our wrapped timestampedLogger, and implement all of the same functions, meaning we can simply swap it out. We're not inheriting from logger, which would make us substitutable at runtime, we're replacing it completely at compile time.

Compile time fiddling like this is what templates allow us to do, and do (fairly) cleanly.

Unknown Return Types

Now imagine a slightly more fiddly thing, which is something we've been playing with doing recently - suppose we have some data-storage classes, for example vector or array and we want to wrap them up in something which is aware of physical units (kg, m, s etc). This is very much a decorator pattern, with a slight twist which is that not only do we want to pass-through (certain) member functions of the storage type unchanged, we also add functions dealing with say multiplying one thing with another.

The motivation for this is a kind of enhanced type safety - a length and a time cannot be added together, so lets exploit the type system to enforce this. Ideally this happens at compile time, as our simulations are time-critical and we don't want needless runtime checks for something that, once programmed correctly, won't change. But we might be adding or changing the equations we're solving, so just manually checking things once is insufficient.

The details of doing this are actually pretty mucky and horrible (see the Github Repo for the details, but beware!), but it showed up an interesting tricky bit with wrappers like this - can we write a function which has a return type which is not fixed, nor deducible from its arguments directly, but which depends on the function we are wrapping, in particular what its return type is?

C++14 introduced the idea of an auto function return type. Why should the programmer have to specify something that the compiler can work out? Consider the following:

returnType getLogFilename(){return myLogger.getLogFilename();}

returnType might be a few things, but let's say it's a string. Or could it be a wstring (string supporting wide characters, such as accented or non-Latin)? Why should we have to go rummage in the logger class? And why should our intermediate, our wrapper, have to care if this changed in an update (leaving aside the myriad problems with changing interfaces...).

Obviously the return type of that can't be deduced (worked out) from the arguments - there are none! But it can be worked out from the call and the compiler can and will do this, so we don't have to, if instead of returnType, we put 'auto'.

A Templated Class

So one more twist - suppose we want to wrap one of multiple possible logger classes. Imagine one of these implements a method 'setToRed()' to make output red (e.g. in a terminal). If we write the following function

void setToRed(){myLogger.setToRed();};

this compiles fine for the class which implements that function, but if the logger we are wrapping does not, we get a compile error, whether or not we ever call this function. This is obvious - that code can't compile. So how can we make this available if and only if the backing class offers it?

We might hope that this:

template <typename T>
void setToRed(){myLogger.setToRed();};

would somehow do it - a templated function "isn't compiled until it's used" we think.

But it doesn't. Roughly, this is because the call myLogger.setToRed() doesn't depend on the template parameter T at all, so the compiler is allowed to check it as soon as it sees the definition, rather than waiting for the template to be used - compilers are complicated things, and a lot of work goes into making template code compile as fast as possible, since it must be done every time.

But we can use templating to do this, and actually, if we write our wrapper "properly", we'll get this ability for free! Before, I assumed we'd just swap out the 'logger' entity rarely, and want only one possible option at any time, so I assumed 'logger' was provided by a library, and I'd maybe use a typedef or using declaration to make sure whatever logging class I was using had that name.

A proper wrapper class would be a templated class, where we can provide information on the logger type when we create it. So we'd do something like this:

template <typename L>
class timestampedLogger{
public:
L myLogger;
string getTimeNow(){...}

template <typename T>
void write(T in){
myLogger.write(getTimeNow());
myLogger.write(in);
}

void setToRed(){myLogger.setToRed();};

auto getLogFilename(){return myLogger.getLogFilename();}
};

We've included that 'auto' return type we mentioned, as well as an explicitly templated function - notice we have to name the parameters differently - T and L are both template types, but are unrelated.

Now to actually create one of these we have to do:

timestampedLogger<logger> myTLogger;

or we could offer a shortcut with a 'using' statement:

using tLogger = timestampedLogger<logger>;

Only if we use a function will the full body be handled, so as long as we don't use it, we can leave the setToRed here. We are now relying on the code which calls (or does not call) this function to know if it can or not - but if we're thinking of our wrapper as substitutable for a plain logger class, that was already the case, so there is no real disadvantage there.

With those two layers of templating we can write everything we need to wrap timestamping around our logger. We have to wrap every function we want to expose, but using templates and auto we can minimise how much information we're repeating about those functions - which we should always try to do. The compiler will work all that out once we substitute a real logger class type for that L parameter.

Like That Type, But Different

For a brief final foray into the power, but also the attendant horror, of the templating system, let's imagine a more complicated decorator class, where our wrapping functions might change the type returned by the functions they wrap.

The units checking project we're working on adds one more twist to this idea - how can we wrap our units wrapper onto something we're returning from a function whose return type we don't know? For instance, suppose we have scalars (single numbers), vectors (physics style so triplets of numbers) and tensors (ditto, so a 3x3 array). Suppose some function goes "down the hierarchy" so returns a lower rank for each, except scalars where it returns the same type.

That is how would we implement the following?

class scalar{
public:
int dat{0};
scalar()=default;
scalar(int val):dat(val){};
scalar exampleFunction(){return scalar(1);}
};

class vector{
public:
int dat[3];
vector()=default;
vector(int a, int b, int c){dat[0]=a; dat[1]=b; dat[2]=c;};
scalar exampleFunction(){return scalar(2);}
};

class tensor{
public:
int dat[9];
vector exampleFunction(){return vector(1, 2, 3);}
};

template <typename ST>
class unitsType{
public:
ST myData;
unitsType<???> exampleFunction(){unitsType<???> val; val.myData = myData.exampleFunction(); return val;}
};

What replaces those '???' ? We can't write anything in there ourselves because it depends on the type of the parameter ST, and crucially is NOT ST itself which would be simple. We can use auto in place of the return type explicitly, but not for the declaration of val. But the compiler knows on some level what type myData.exampleFunction() will return, and we can ask it to help us by spitting that out:

using ReturnTypeExample = decltype((std::declval<ST>()).exampleFunction());

Here decltype is saying 'give me just the type information', and declval is taking a type, ST, and creating a sort of compile-time instance of that type so that we can access a member function by name. So we have what we need: unitsType<ReturnTypeExample> is the return value and the type we need to give to val in the body. Perfect.

But I Want to do it Myself

But there's one last twist - what if we have the same desire as before, to allow this for functions that ST may or may not implement? The decltype expression actually names the function at compile time to get that return type, AND there's no template substitution going on which might defer that until the wrapper function is actually used. The solution to this is a pretty horrible trick, and I DO NOT KNOW for sure it will work in all circumstances, but it solves this problem:

We have to add another layer of template, so that only when a specific instantiation uses the function will everything unroll and the function actually be needed. Let's name this possibly-nonexistent function nopeFunction() since nope, I didn't implement it...

So we do this:

template <typename Q>
using ReturnTypeNope = decltype((std::declval<Q>()).nopeFunction());
auto nopeFunction(){
unitsType<ReturnTypeNope<ST> > val;
val.myData = myData.nopeFunction();
return val;
}

This works - only when the nopeFunction() here is called does anything attempt to unpack its return type. The auto return type deduction can only happen when the body is compiled too.

As a final note, suppose we wanted things to be a bit clearer, or auto would not work for us for some reason. The following DOES NOT work:

unitsType<ReturnTypeNope<ST> > nopeFunction(){
unitsType<ReturnTypeNope<ST> > val;
val.myData = myData.nopeFunction();
return val;
}

because the return type tries to substitute when this line is reached. We need to keep that deferred layer, so we have to do:

template <typename Q>
unitsType<ReturnTypeNope<Q> > nopeFunction(){
unitsType<ReturnTypeNope<Q> > val;
val.myData = myData.nopeFunction();
return val;
}

(NOTE we don't have to name this Q; it has no relation to the other Q. It's just handy to re-use the letters for similar purposes where they don't overlap, to avoid running out).

This leaves one FINAL (promise) problem: when calling this wrapper, the type for Q cannot be deduced so we have to do ugly things in the calling code like this:

unitsType<scalar> myTyp;
myTyp.nopeFunction<scalar>();

We fix this with a final twist - a default value of ST for the template parameter Q. Note carefully: this is not defining them to be equal, it is setting a default. We get:

template <typename Q>
using ReturnTypeNope = decltype((std::declval<Q>()).nopeFunction());

template <typename Q=ST>
unitsType<ReturnTypeNope<Q> > nopeFunction(){
unitsType<ReturnTypeNope<Q> > val;
val.myData = myData.nopeFunction();
return val;
}

And now as long as the calling code doesn't call nopeFunction, nothing cares if our ST type implements it or not.

This is, frankly, a bit too deep for sanity, but it does work and is not exactly an edge use-case when dealing with decorators and wrappers.


August 03, 2022

Global data idioms in C++


We've spoken a fair bit about global variables and why they are risky, but sometimes they are the best solution available. For instance, a recent video discussed the idea of "tramp" data, which is passed through certain objects and functions simply to get to another place. This, as we talked about there, has a lot of problems, several of which are close to or identical to, the equivalent problems with globals. So... sometimes the simple fact is, you have to choose your evil, and sticking to broad design principles is not a good motivation for locally bad design. As I quoted in that video:

"But passing global data into every function, whether it needs it or not, imposes a great deal of overhead. This is known as "tramp data", and often reflects a design error. If these things are truly global, then they're global, and pretending that they aren't, in order to make your code "OOP", is a design error."

(Pete Becker, this StackOverflow answer)

So let us agree that there are cases where you have global data and/or objects and that representing them as global state is, in fact, the correct approach. This post is going to discuss some of the options for making this "as least-bad as we can".

Static - more slippery than stable

Static as a C++ keyword is another in the unfortunately long list of C++ keywords with subtly different meanings in different contexts. I am not going to try and explain what it technically means, since that's been done before (e.g. here) and has to be quite technical. In a moment, we'll go through a few uses of it and how they work, and after that the definition might make more sense. Let's just use one small bit of the implications - static on a variable or class member is a way to get a "storage location" (i.e. a variable) which lasts for the entire program. In other words, its lifetime is the program duration, and as we discussed in another recent video, that is in many ways equivalent to being a global variable.

Because it's "basically a global" we have to caution against static (unless also const) in any code which is, or might in future be, multithreaded. Global mutable (able to be changed, aka not const) state is anathema to threading. In nearly every case, you do not want to try and handle the combination!

Definition, Declaration and the One Definition Rule

C++ requires you to do two things so that a variable or function "works". Firstly, you need to have declared what it looks like, so that all parts of the code know how to handle it. This means giving a variable a type, or a function a prototype - so that a variable access or function call has the right "pattern" - knows how big something is, how many parameters it has, etc. But you also have to define, or "fill in" what something actually is - create the storage space (memory) for a variable, or define the body of the function.

For variables, especially if you have been avoiding globals, you may never have encountered this, because in most cases the declaration and definition are the same - int i; does both. Do note that this is nothing to do with initialising or giving a value to a variable. The only time they become separate is in certain cases where we want multiple parts of the code to be able to use the same variable, such as for globals. Method 1 below shows how we can do this.

For functions, it is a bit more familiar - to call a function we need to have access to its prototype, and for compilation to complete and the program to run, it has to be given a body that can actually execute. For simple functions, this is why we normally declare prototypes in a header, and function bodies in a cpp file. If we put the body in a header, and include that header several times, we get errors telling us the function is multiply defined. For classes though, we can happily define function bodies in a class definition without any problem. What's happening here?

First, let's just clarify what including a header does - it pretty literally dumps the code from the included file into the including file. This is old stuff - the compiler has nothing to do with it, as it happens at the "pre-processing" step while the code is still just text. By the time the compiler sees it, the definition you wrote might appear several times, and there is no way to know that the copies came from the same place.

Now, the problem arises because C++ disallows certain forms of ambiguity. Suppose a function could be defined in several places - and suppose these definitions were different! Imagine the chaos if one form were used in some places and another in others. Or might the compiler be expected to pick one? Disallowing this is mostly a Good Thing, but sometimes it has issues. For instance, being unable to define any functions in a header would rule out "header-only" libraries, where you simply include them and it works. It would also severely limit templating.

Not being able to define something multiple times is called the One Definition Rule. Because of the drawbacks just mentioned though, there are several places where it is allowed to be violated with the strict requirement that the programmer takes care of the risks. That is, you may be allowed to have a repeated definition, but if it is not the same everywhere, that is your problem (and mostly undefined behaviour, so a Very Bad Thing). The details aren't important here, and it's enough for us to suppose that where the benefits were sufficient, the rule was allowed to be broken.

Inline

Inline is a special keyword which, yet again, has a complicated history. Originally, "inline" was a hint to the compiler that a function should be inlined (think - pasted into place in source code, rather than executed with a jump). In order to do this, the compiler would need to have access to the function body in all the places in the code it might be used. This meant the function had to be fully defined in an included header (strictly, not true, but good enough for us), which violated the One Definition Rule. Inline was considered useful, thus the rule had to be bent - C++ generally tries to allow useful things where it can (where the compiler writers think they can make it work). Once inline was allowed to bend the rule, people used it for that purpose, such that it is now pretty much used only for this case.

Since a class member function defined within the body of a class can only really be meant to be inline, this is done by default too, hence why you may never have seen this keyword. Note that it applies to functions only - and see Method 1b below for the newer extension to inline, inline variables.

Global state example in C++

Let's assume we really do have some data which is accessed in many parts of our program, both for read and write, and that we have already decided this is the best design. Perhaps it is used so widely that we would end up passing it endlessly, usually as that "tramp data". Once it's passed, it's accessible, and only some very horrible tricks (const_cast...) can allow us to restrict where it can be changed. It's already global in implication, so why not admit this and allow it to be global in design?

What options do we have to do this, and what are their pros and cons?

Method 1 - A "simple" global variable

For simple data, such as a single integer, we can use the simplest "global variable" idiom - that is we want a variable whose declaration ("pattern") is available to all parts of our program, which is defined (actually created) once and only once in our code. This means we can't do what we might first think of and just create it in a header file, because that would define it several times and the build will fail with a multiply-defined name. Nothing clever we can do with include guards or trickery can avoid this. What we can do though, is to declare it in a header, and define it in a cpp file, like this:

file.h:

#ifndef file_h
#define file_h
extern int my_var;
#endif

and in file.cpp

#include "file.h"
int my_var = 10;

So what does all that mean? Our header file uses standard include guards (we'll leave these out in future) which stop the header being included repeatedly. We have our my_var variable, an int, and we declare it "extern". This tells the compiler that there will be a definition for my_var by the time the code is linked (if you're not too familiar with linking, think: compiled files combined into an executable). This means all of the parts of the code which use this header will happily compile using what they know my_var has to look like, and not worry about where it might be actually created. Then, the C++ file does the actual creation. This can happen only once, or we would risk having two separate variables with the same name.

This is the simplest idiom to get what we want, but has several problems. It's a bit confusing: extern is an old keyword that many people will never encounter. We have to go looking for the actual initialisation step, and verify that we set a value. In a lot of cases, we might have no other reason to have the file.cpp file except to instantiate one variable - and our only alternative there is to demand it be defined in some other of our cpp files, which is confusing, risky and all around a Bad Thing.

Lastly, a name as generic as my_var has now been "claimed" throughout our entire program and we re-use it at our peril. Shadowing, where a local variable "covers up" a global one, is always confusing. What would this snippet do, for instance?

int main(){
std::cout<<my_var<<std::endl;
{ int my_var = 11;
std::cout<<my_var<<std::endl;
}
std::cout<<my_var<<std::endl;
}

Actually, the problem is a bit worse than this, because only files including our header will see my_var, and you can probably work out why this can be a maintenance nightmare if suddenly code changes mean two names which had been separate begin to collide!

Method 1a - A namespaced global

To avoid the shadowing issue, and improve this solution from "pretty awful" to "alright", we can at least restrict our variable name to a namespace. If we are careful with our namings and dividing things into coherent sets, we can quite usefully indicate more about our variable, and make it easier to find places that related variables are, or should be, changed. For example

namespace display_config{
extern int width;
extern int height;
}

and

int display_config::width = 1080;
int display_config::height = 720;

Generally, if we're changing our "display height" we'd also expect the width to be changed, and we now have some ability to spot this.

Method 1b - [C++ 17 or newer] This, but better

From C++ 17, this kinda obviously common and useful idiom is supported much better, by the addition of "inline" variables. We discussed inline above for functions, and this is very similar. The definitions must match but, since ours is a single line included verbatim each time, this is fine. We end up with the much more elegant looking:

namespace display_config{
inline int width=1080;
inline int height=720;
}

This has several advantages - not needing a cpp file definition, being able to see at a glance that our variables are initialised (instead of having to look in two places), and having one less keyword/idiom to remember. However, C++17 is perhaps a little too new to use without at least considering where you will be compiling/running your code, as you might have to put in a request for a compiler update for a few years yet.

Method 2 - A class with static members

At the point where we're talking about linking variables to each other, we are into the territory where a class becomes a good solution. This would let us, for example, relate the setting of height and width explicitly. The obvious step from our namespace example, to a basic class is this:

file.h

class display_data{
  public:
    static int height;
    static int width;
};

and file.cpp

int display_data::height=720;
int display_data::width=1080;

Now this is rather different looking and the keywords have changed. We no longer have any extern, but we have introduced static, and we have these slightly unexpected definitions, without which we get linker errors telling us the variable is undefined.

A static class member variable is shared by all instances of the class. If we read or write to it, this must have the same effect (in terms of how the bits in memory are changed) whichever class instance we would use - thus we don't need to specify. In fact, we don't need to have any instance, we can access those variables as display_data::height from anywhere. This sort of explains why the second bit is needed - if they're not associated with any class instance, the variables have to be "created" and given storage space somewhere. As before, we need those to be in some cpp file and often they are the only thing there, which is ugly.

So this also has problems. Firstly - if our class has only static members, that's weird, and often considered an "anti-pattern". It's a heavy solution and goes against some of the motivation for objects. However, if we want to control the setting of height and width (forcing them to be non-negative for instance), we can get the compiler's help to enforce this by making them private and providing setter and getter functions.

On the whole though, entirely static class members is an oddball solution, and I'm not sure where it's really useful.

Method 3 - A static class instance

Going back a bit, our display has sort of taken on an "object like" existence, with several items of data and methods that act on them. This object really could have several instances, it's just that for our specific program, we want to have a global one referring to "the display". This is far more naturally represented by a global instance of a regular class, so we can use the first approach with a user-defined class rather than a plain int, and get some benefits.

However, we still have most of the drawbacks of global-ness, so we do need some strong motivation. And here we additionally have all of the drawbacks of method 1 like needing the C++ file. Moreover, there is a thing called the Static Initialisation Order Fiasco if we have interaction between static objects, and a lot of other stuff gets really messy.

We mention this option for completeness, but would probably never recommend it.

Method 4 - Function Local Statics

Possibly the best approach to allow us to have a single global class like the previous method, but more safely, is to exploit function-local static variables. These are a lot like global statics, but their scope is restricted to the function where they "live". This clears up a lot of the issues from Method 1 and is much, much better. We do this (showing the function body only; there would also be a header containing a prototype):

int get_display_height(){
  static int the_display_height = 720;
  return the_display_height;
}

That lets us have our global variable, but gives us no way to set it. To fix that, it's easy in this case to find a sentinel value:

int get_display_height(int new_val = -1); // In header

int get_display_height(int new_val){
  static int the_display_height = 720;
  if(new_val != -1){ the_display_height = new_val; }
  return the_display_height;
}

but not all things have a sentinel.

In those cases, we could instead return a reference or pointer to the variable, like this:

int * get_display_height(){
  static int the_display_height = 720;
  return &the_display_height;
}

This is perfectly valid for a static function local, in a way which it is absolutely not valid for a regular function local variable. Yet again, we're running into confusing idioms and must tread carefully! Worse still, we've now opened up setting of our variable to the entire code and can no longer restrict values to be positive, or anything else!

There is another subtlety here: a casual reading of those snippets might miss that the initialisation to 720 is done only the first time the function is called. Any subsequent calls refer to the same variable, the_display_height, but the initialisation is not redone. In this case, we absolutely rely on that behaviour, but if the initialisation is something more complicated, like a heap allocation, it can really confuse you.

Method 5 - A Singleton Class

The final method we'll discuss here is the most heavy weight, but can be really useful. Suppose we really do have a class containing data and operations on it, and we want there to exist precisely one of them. This will be our global object. Much like the previous idiom, we'll have the actual storage location be a function-local static variable. So we will do something like this:

theClass * theClass::get_instance(){ // declared static inside theClass
  static theClass * the_instance = new theClass();
  return the_instance;
}

Anywhere that we want to use the class, we simply get it using theClass::get_instance()->blah.

There's one more trick we need, because we said that we want only one of these to exist - which is to privatise the constructor of theClass, so that only this function can create an instance. Voila, a single entity! This is why the function above is a class-member function: it, and only it, is able to call the constructor.

This is generally known as the Singleton Pattern and is very handy. However, be aware that it does involve statics and shared entities and so must be handled carefully. In particular, you must make sure all operations on the class leave it in a valid state, or two parts of your code might both use it and confuse each other - obviously this gets a lot more pressing in multithreaded code, but even serial code can have the problem.

Conclusion

Global data, if it truly is global, is global however you code it. You have to use caution, but whether you pass it about, or use one of these tricks, your data is subject to change in multiple places. Be wary. Use every tool the compiler gives you, such as const, and block scoping, and only globalise things that are worth the headache!


June 06, 2022

Wherefore art thou – variable names matter!

Sometimes, the simplest questions are the hardest to answer. For instance - what is the meaning of the word "the"? If you've never thought about this, have a go. If you'd prefer a non-grammar, or non-English specific example, try to describe the number '1'. Trickier than you'd think, isn't it?

But that is hardly relevant to programming, so let's look at today's deceptively simple question instead. Namely, how long should variable and function names be?

Some people might seem to have an answer to this question, such as "between one and five words, usually two or more". They are wrong. Any answer containing numbers is unhelpful and either sometimes wrong, or too broad to mean anything. For instance, "Supercalifragilisticexpialidocious" [typed from memory... excuse spelling] is one word, and is terrible. And "SolveWaveEquationSecondOrderWithLimiter" is seven words and (in the right context) is quite good. "FlagSettingWhetherWeShouldPrintTheAnswerToScreenOrNot" is clearly terrible - but it is thoroughly descriptive.

So why do we struggle so much to answer this question? Because we are in a sort of "tug-of-war" between two competing interests, and which one pulls a little harder depends on a lot of things. Both ends of the rope are anchored in clarity: on the one hand we want maximum clarity for the function - a very descriptive name. This tends to favour longer names, with more detail. On the other hand, we want maximum clarity for the code as a whole - a name that doesn't take up too much mental space in a block of code. This tends to favour shorter names, with fewer, simpler words. To find our optimum, we must somehow balance these two.

A brief aside here into our absolute best weapon in this - make the two ends stop pulling. Find a way to make "descriptive" and "compact" the same thing. Specialised definitions of terms specific to a field of interest, i.e. jargon, really works in our favour here. We must use it with caution, because new jargon can be very jarring and difficult to master, but in general jargon terms are addressing exactly the same problem - descriptive yet compact terms. For instance, a word you may barely think of as jargon - "laptop". Let us expand this definition - perhaps we get "a portable, all-in-one computing device". But we still have some specialised terms here - what is "all-in-one"? What is "computing"? We could continue the expansion almost indefinitely. Or, we can simply use the term "laptop" and in most cases be perfectly understood.

It is important to flag here that word, "most". Is a tablet a laptop? If I strap a monitor to a tower PC and plug in a keyboard, do I have a "laptop"? It's going to depend on context whether those are reasonable (OK, the second one is extremely unlikely to be). But consider similar questions - "do you have a car?" "No, I have a van" - sensible answer, or irritating pedant? "We need to seat 5 people, who has a car?" "I have an MX5" - as it turns out, "car" doesn't always mean 5 seats.

Back from our aside now, new weapon in hand. Careful, selective use of jargon can make our names very descriptive, while remaining compact. If the jargon we use is not universal, we can provide a glossary or add context in our docs. We should be very very careful about making up our own jargon here, because unfamiliar terms can lead to misunderstanding, but if your field provides words, exploit them.

One last important point - sometimes an otherwise optimal name is not useful due to ambiguity. This can be typographical - we should never have names that differ only in characters like '1' and 'l' or differ only in the case (small or CAPITAL) of the letters. Ambiguity can be similarity with some other function name - imagine we somehow tried to have "GetDiscreteName" and "GetDiscreetName" - could you even tell those apart? Ambiguity is an enemy of clarity - avoid it.

OK, actually there is one final point. Clearly redundancy can only harm our compactness of names. But it does have some applications. We might want two functions with very similar names, but different parameters - "SolveTypeAEquation(a, b, c)" and "SolveTypeAEquationNormalised(a, b, param)" for instance. If we can't design the code to make these less confusing, we might deliberately make one name less individually clear, if it makes its usage clearer. So perhaps the second one becomes "SolveTypeAEquationNormalisedWithParam(a, b, param)" which repeats information already in the signature (redundancy), but helps us keep in mind which function we want.

Short posts like this rarely have conclusions, because everything has already been said, usually repeatedly. Since repetition is another form of redundancy that actually works though, let's restate our key point:

We must balance the demands of clarity between our functions and our wider code and try and find a name which is BOTH descriptive and also short and easy to handle. Sometimes one of these will be a bit more important, sometimes the other.

