Many traditional consulting companies help their clients to answer key questions. At 71point4 we focus on helping our clients to build the capacity to ask and answer questions with the data at their disposal.
Rather than delivering once-off analysis, our work focuses on creating automated data pipelines that keep on giving. We leverage our extensive experience across the data value chain – from big data architecture and data engineering to data analysis – to deliver data solutions that inform decision-making on an ongoing basis.
A data pipeline comprises data collection, transformation, analysis and dissemination, as illustrated in the schema below. It incorporates both defensive and offensive elements of data strategy, ensuring that data is accurately collected and captured, anonymised and optimised to create a golden copy that can be analysed and disseminated in line with user permissions and needs. These needs might be fairly standard, easily met by static reports or dashboards with some drill-down functionality. Alternatively, they might be more complex, requiring flexible analysis of the underlying data using fit-for-purpose analytical tools and approaches.
The more these processes can be automated, the less scope there is for human error – whether in data capture or transformation, or in the calculation of key metrics. In addition, automation helps to embed and standardise data processes. Data science teams can focus on adding value through innovation and bringing more data into the pipeline, rather than on cleaning data or preparing standard reports.
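As a simple illustration, the Python sketch below shows what one automated stage of such a pipeline might look like. The sample records, table name and column names are purely illustrative, and an in-memory SQLite database stands in for a production database such as MariaDB or Greenplum: raw records are collected, direct identifiers are anonymised, the cleaned data is loaded into a golden-copy table, and a key metric is calculated in exactly the same way on every run.

```python
import csv
import hashlib
import sqlite3
from io import StringIO

# Illustrative raw extract; in practice this would come from a source system
# or a scheduled file drop rather than being hard-coded.
RAW_CSV = """customer_id,region,amount
C001,Region A,2500
C002,Region B,1800
C003,Region A,3200
"""

def anonymise(identifier: str, salt: str = "pipeline-salt") -> str:
    """Replace a direct identifier with a salted hash, so the golden copy
    carries no personal identifiers but records remain linkable."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:12]

def load_golden_copy(conn: sqlite3.Connection) -> None:
    # Transform on the way in: anonymise identifiers and standardise types.
    rows = [
        (anonymise(r["customer_id"]), r["region"], float(r["amount"]))
        for r in csv.DictReader(StringIO(RAW_CSV))
    ]
    conn.execute(
        "CREATE TABLE IF NOT EXISTS transactions "
        "(customer_hash TEXT, region TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO transactions VALUES (?, ?, ?)", rows)
    conn.commit()

def key_metric(conn: sqlite3.Connection):
    # A standard metric, calculated the same way on every run.
    return conn.execute(
        "SELECT region, COUNT(*) AS n, AVG(amount) AS avg_amount "
        "FROM transactions GROUP BY region ORDER BY region"
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")  # stand-in for a production database
    load_golden_copy(conn)
    for region, n, avg_amount in key_metric(conn):
        print(f"{region}: {n} records, average {avg_amount:.0f}")
```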
Our software and technology stack has been called ‘cutting edge’. And it is. We use open-source, free-to-use products such as MariaDB and Greenplum databases, supported by the SQL, R and Python programming languages. This stack gives us the flexibility to custom-build solutions that meet our clients’ needs, without being shackled by the excessive costs and often limited flexibility of proprietary software products. The stack also provides a foundation that clients can develop further as their needs change over time. For this reason, training, on-the-job capacity building and mentoring for our clients are central to our approach. This ensures continuity and sustainability beyond the term of our work.
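As an example of the stack in action, the sketch below queries a database from Python using the open-source MariaDB Connector/Python driver. The connection details, database name, table and columns are hypothetical placeholders rather than a real client configuration.

```python
import mariadb  # MariaDB Connector/Python, an open-source driver

# Hypothetical connection details for illustration only.
conn = mariadb.connect(
    host="localhost",
    port=3306,
    user="analyst",
    password="change-me",
    database="analytics",
)
cur = conn.cursor()

# A standard report query: the same SQL runs on every refresh, so the
# published figures are always calculated in the same way.
cur.execute(
    "SELECT region, COUNT(*) AS records "
    "FROM golden_copy GROUP BY region ORDER BY records DESC"
)
for region, records in cur:
    print(f"{region}: {records} records")

cur.close()
conn.close()
```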
Aside from building deep technical skills, we also focus on enabling our clients to become data-driven. We run data literacy workshops with business units that generate and use data, providing non-technical teams with a data vocabulary and an understanding of the processes and mechanisms that transform raw data into knowledge and insights. These workshops facilitate more productive interactions between technical and non-technical teams. They also create a greater awareness of the power of data to support operational and strategic decision-making.
As an end-to-end consultancy – focusing as much on analysis as on engineering – we are well placed to do this. We have a deep understanding of the broader data universe beyond internal data, including qualitative data, survey data and other publicly available administrative, geographic and social media data. We can therefore help our clients identify useful data sources that describe the market beyond their current reach. We also have a strong understanding of the industries we work in, so we know what questions to ask and how best to answer them.
In addition, through our clear and, dare we say, beautiful data visualisations, we develop engaging outputs that invite discussion and debate.
The bottom line: we train by example, hoping to create the same fascination with data and joy in analysis that motivates us to get up and go to work every day.