NiHu 2.0
General Techniques

Store pattern

The store pattern is used to automatically create a single static instance of each class template specialisation. The pattern is implemented as follows:

template <class C>
struct store
{
    static const C m_data;  // the statically stored instance
};

template <class C>
const C store<C>::m_data;   // definition of the static member

The store pattern's usage is explained with a simple example. The following code snippet defines a cache template that stores 1000 values of an integral type in a dynamically allocated array. The cache class is indexable using the overloaded subscript operator.

template <class T>
class cache
{
public:
    cache()
    {
        std::cout << "cache constructor " << typeid(T).name() << "\n";
        // allocate the array and fill it with the cached values
        m_ptr = new T[1000];
        for (size_t i = 0; i < 1000; ++i)
            m_ptr[i] = T(i);
    }

    ~cache()
    {
        std::cout << "cache destructor " << typeid(T).name() << "\n";
        delete [] m_ptr;
    }

    T const &operator[](size_t idx) const
    {
        return m_ptr[idx];
    }

private:
    T *m_ptr;
};

The following main function uses the store pattern to read two elements from cache<int> and one from cache<char>. This is accomplished by instantiating the store template with cache<int> and cache<char> and accessing their static member m_data as follows:

int main(void)
{
    std::cout << store<cache<int> >::m_data[5] << std::endl;
    std::cout << store<cache<int> >::m_data[25] << std::endl;
    std::cout << store<cache<char> >::m_data[33] << std::endl;
    return 0;
}

The code's output is the following. (The names printed by typeid are implementation defined; the mangled names i and c shown here are produced by GCC. The char element at index 33 prints as '!', the character with ASCII code 33.)

cache constructor i
cache constructor c
5
25
!
cache destructor c
cache destructor i

Both caches are automatically constructed once at program start-up and destructed at program exit, in reverse order of construction. This property is useful when different caches are used extensively throughout a compilation unit: the programmer does not need to define the static cache instances, as the necessary constructors and destructors are called automatically, exactly once each.

This technique is extensively used in NiHu to conveniently define accelerator caches containing quadratures and precomputed weighting functions.