
The Sampling Nomogram, Simply Explained | by Fouad Faraj | Jun, 2022



A brief introduction with a practical mining application and relevant code

Sampling nomogram. Image by author

In industries where element concentrations from particulate materials are relevant for decision making, such as mining or environmental applications, it’s crucial that representative samples are taken from the volume or lot being considered. If poor sampling practices take place, erroneous decisions can be made, since typically only a tiny proportion of the lot is sampled. For example, ore/waste limits in mining or areas with high/low contaminant concentrations for environmental applications could be erroneously defined.

The figure below illustrates an example where a final sample of 50 grams is taken from a lot of 400 kg of rock. To ensure the 50 grams is representative of the 400 kg, appropriate crushing and subsampling strategies are required, following the widely accepted Gy’s Theory of Sampling.

Schematic illustrating a 400 kg lot from which representative subsamples are taken and crushed to characterize the lot. Image by author

Commercial software is available to assist users such as miners and environmental scientists with the construction of sampling nomograms based on Gy’s Sampling Theory. Sampling nomograms are used to inform sampling decisions or operations; however, this software can be expensive and abstracts away important details of what runs on the back end. Gy’s sampling nomogram is actually quite straightforward and can be easily constructed in Python. With a bit of work, the black box can be opened up, allowing better interpretations to be made through an increased understanding of what is being done on the back end.

Presented here is a brief introduction to Gy’s Theory of Sampling with a practical mining application and code to construct and interpret a sampling nomogram.

The fundamental error as defined by Gy’s sampling theory [1] is the minimum sampling error assuming the sampling practices are perfect. It’s a measure of the relative variance, from which the square root is typically taken to get the relative standard deviation, which is equivalent to the coefficient of variation. The relative standard deviation is usually expressed as a percentage and can then be used to determine if the theoretical sampling accuracy is good enough for the application at hand.

The fundamental error equation is made up of six distinct factors, as shown and briefly described below.

Equation for Gy’s Relative Variance of the Fundamental Error. Image by author

There are various ways that people have proposed to quantify each variable. Following is a brief explanation for each one, but the optimal method for quantifying the variables depends on the application at hand.

The nominal particle size, d

The nominal particle size approximates the size of the largest particle in the sample, measured in cm. A common definition is the 95% passing particle size, which is the size for which 95% of the sample mass would pass. Typically this value is estimated by visual examination, but it needs to be selected very carefully, ideally with particle sizing sieve tests, as it has a cubed effect on the fundamental error.

Sample mass, Ms

The mass of the sample, measured in grams. Increasing the sample mass will reduce the fundamental error as a greater proportion of the lot is being represented. Often, the fundamental error equation is rearranged to solve for the sample mass required for a certain acceptable error when the sampling methodology is operationally constrained.
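As a sketch of that rearrangement, the commonly used simplified equation σ² = f·g·c·l·d³/Ms can be solved for Ms. The factor values below are hypothetical placeholders, not values from this article’s example:

```python
def required_mass(f, g, c, l, d_cm, target_rsd):
    """Sample mass (g) needed to hit a target relative standard
    deviation, from the simplified Gy equation rearranged as
    Ms = f*g*c*l*d^3 / sigma^2 (lot mass assumed much larger)."""
    return f * g * c * l * d_cm**3 / target_rsd**2

# Hypothetical factors: f=0.5, g=0.25, c=120 g/cm^3, l=0.3, d=1 cm
mass_g = required_mass(0.5, 0.25, 120.0, 0.3, 1.0, target_rsd=0.05)
print(f"required sample mass: {mass_g:.0f} g")  # 1800 g
```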

The mineralogical factor, c

Also known as the composition factor, the mineralogical factor describes the maximum heterogeneity which is reached when the material of interest is completely liberated from the gangue material. The mineralogical factor is measured in density units of g/cm³ and can be estimated using the equation below.

Equation for the mineralogical factor. Image by author

Where aL is the unitless decimal proportion of the analyte in the lot, λm is the density of the particles containing the analyte, and λg is the density of the gangue.
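The standard form of this equation, c = ((1 − aL)/aL)·((1 − aL)·λm + aL·λg), is simple to code. The densities below are illustrative assumptions for a copper ore, not values from this article’s example:

```python
def mineralogical_factor(a_l, rho_m, rho_g):
    """Gy's mineralogical (composition) factor in g/cm^3.
    a_l: decimal proportion of the analyte-bearing mineral in the lot,
    rho_m: density of the analyte-bearing particles (g/cm^3),
    rho_g: density of the gangue (g/cm^3)."""
    return ((1 - a_l) / a_l) * ((1 - a_l) * rho_m + a_l * rho_g)

# Illustrative: 1% chalcopyrite (4.2 g/cm^3) in 2.7 g/cm^3 gangue
c = mineralogical_factor(0.01, 4.2, 2.7)
print(f"c = {c:.0f} g/cm^3")  # c = 414 g/cm^3
```

Note how low-grade material drives c up sharply: the rarer the analyte, the greater the maximum heterogeneity.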

The liberation factor, l

Many different formulas have been proposed to quantify the liberation factor, which is a correction for the assumption made with the mineralogical factor. Essentially, as the material becomes more heterogeneous, the liberation factor gets closer to 1, at which point the analyte of interest would be completely separate from the gangue material. On the lower end, at 0.05, the material would be very homogeneous, with the gangue and analyte co-existing in the same particles.

Gy proposes the liberation factor can be estimated by taking the square root of the liberation diameter for the analyte of interest divided by the particle’s diameter.
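That estimate, l = √(d_lib/d) capped at 1, can be sketched as follows; the diameters used are illustrative:

```python
from math import sqrt

def liberation_factor(d_liberation_cm, d_cm):
    """Gy's liberation factor l = sqrt(d_lib / d), capped at 1
    once particles are finer than the liberation diameter."""
    return min(1.0, sqrt(d_liberation_cm / d_cm))

print(liberation_factor(0.01, 1.0))    # 0.1 for 1 cm particles
print(liberation_factor(0.01, 0.005))  # 1.0 once fully liberated
```

Crushing reduces d, so each comminution stage pushes l toward 1 even as it reduces the d³ term in the fundamental error.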

The shape factor, f

Also known as the coefficient of cubicity, the shape factor is a dimensionless value representing how closely a particle’s shape matches that of a cube. Particles close to the shape of a cube have f=1, particles that more closely resemble a sphere have f=0.5, and flaky particles have f=0.1. The shape factor can also be calculated by dividing the particle’s volume by its diameter cubed.

For samples consisting of particles with multiple shapes, the best practice is to use the average shape factor of the analyte of interest.

The granulometric factor, g

Also known as the particle size distribution factor, this dimensionless variable takes varying particle sizes into account for estimating the fundamental error. A common way to estimate this parameter is using the discrete table below with descriptions of the different values of the granulometric factor as proposed by Gy (1998) [2]:

  • g = 0.25: Undifferentiated, unsized material (most soils).
  • g = 0.40: Material passing through a screen.
  • g = 0.50: Material retained by a screen.
  • g = 0.60–0.75: Material sized between two screens.
  • g = 0.75: Naturally sized materials, e.g., cereal grains, certain sands.
  • g = 1.0: Uniform size (e.g., ball bearings).
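With all six factors in hand, the relative variance of the fundamental error follows directly. Below is a minimal sketch, assuming the common form σ² = f·g·c·l·d³·(1/Ms − 1/ML), where the 1/ML term drops out when the lot is much larger than the sample; the factor values are illustrative placeholders:

```python
from math import sqrt

def fe_relative_variance(f, g, c, l, d_cm, ms_g, ml_g=None):
    """Relative variance of Gy's fundamental error.
    Pass ml_g=None to drop the 1/ML term (lot >> sample)."""
    inv_ml = 1.0 / ml_g if ml_g else 0.0
    return f * g * c * l * d_cm**3 * (1.0 / ms_g - inv_ml)

# Illustrative values: f=0.5, g=0.25, c=100 g/cm^3, l=0.3,
# d=1 cm, a 20 kg sample from a 400 kg lot
var = fe_relative_variance(0.5, 0.25, 100.0, 0.3, 1.0, 20_000, 400_000)
print(f"relative standard deviation = {sqrt(var):.2%}")  # 1.33%
```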

Say we have a 2D site of 8 by 4 units with an elliptical area of high copper grade, as shown in the figure below. The area of elevated copper grade is the lot that we want to sample representatively and obtain a grade for. The flow chart, also shown below, illustrates the sampling strategy: ten 40 kg samples are taken randomly from within the lot boundaries for a total sample mass of 400 kg. The 400 kg sample then goes through two stages of crushing (comminution) and subsampling, using tools like rotary or riffle splitters, down to a final sample of 0.5 kg, which gets sent to a lab for assay to determine the grade.

2D site of 8 by 4 units with an elliptical area of high copper grade and flow chart showing the sampling strategy. Image by author

We’ll test whether the sampling is representative by checking that it achieves an uncertainty, or relative standard deviation, of <5% under Gy’s sampling theory.

Theoretical Sampling Accuracy Results

Using some assumed inputs, the nomogram below is output using matplotlib from this script on GitHub. The main steps labeled in the nomogram are briefly described here:

  • A to B: primary crushing of the 400 kg sample from a nominal size of 10 in to 1 in
  • B to C: subsampling the 400 kg down to 20 kg at 1 in
  • C to D: secondary crushing of the 20 kg sample from a nominal size of 1 in to 0.25 in
  • D to E: subsampling the 20 kg down to 0.5 kg, which gets sent to a lab for assay
  • E to F: pulverization of the final 0.5 kg sample and preparation for assay

Sampling nomogram with different points labeled showing the iterative crushing and subsampling of a 400 kg sample at a nominal particle size of 10 in to a final sample of 0.5 kg at a nominal particle size of 0.03 in. Image by author
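A minimal matplotlib sketch of such a nomogram: crushing steps drop the variance vertically (same mass, smaller d), while subsampling steps raise it (same d, smaller mass). The factor values and point coordinates below are illustrative assumptions, not the inputs behind the figure above, and the liberation factor is held constant for simplicity even though it strictly varies with d:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt

def fe_variance(f, g, c, l, d_cm, ms_g):
    # Simplified Gy relative variance (lot mass term dropped)
    return f * g * c * l * d_cm**3 / ms_g

F, G, C, L = 0.5, 0.25, 120.0, 0.2  # illustrative factors

# (label, sample mass in g, nominal particle size in cm)
points = [("A", 400_000, 25.4), ("B", 400_000, 2.54),
          ("C", 20_000, 2.54), ("D", 20_000, 0.635),
          ("E", 500, 0.635), ("F", 500, 0.076)]

masses = [m for _, m, _ in points]
variances = [fe_variance(F, G, C, L, d, m) for _, m, d in points]

fig, ax = plt.subplots()
ax.plot(masses, variances, marker="o")
for (label, m, _), v in zip(points, variances):
    ax.annotate(label, (m, v))
ax.set(xscale="log", yscale="log",
       xlabel="Sample mass (g)", ylabel="Relative variance")
fig.savefig("nomogram.png")
```

Log-log axes are what make the nomogram readable: subsampling at constant d traces a straight line of slope −1, and crushing appears as a vertical drop.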

Shown below are the calculations for the total fundamental error obtained by summing the relative variances at each stage of subsampling (B to C and D to E in this example).

Assumptions and calculations for the total relative standard deviation by summing the relative variances at each stage of subsampling, from B to C and D to E. Image by author

The resulting relative standard deviation is 4.8%, which is under the 5% target.
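The stage-summing itself is one line: relative variances add across independent subsampling stages, and the square root of the sum gives the total relative standard deviation. The per-stage factors below are hypothetical placeholders, not the assumed inputs in the figure above:

```python
from math import sqrt

def fe_variance(f, g, c, l, d_cm, ms_g):
    # Simplified Gy relative variance (lot mass term dropped)
    return f * g * c * l * d_cm**3 / ms_g

# Hypothetical inputs for the two subsampling stages
var_bc = fe_variance(0.5, 0.25, 120.0, 0.2, d_cm=2.54, ms_g=20_000)
var_de = fe_variance(0.5, 0.25, 120.0, 0.4, d_cm=0.635, ms_g=500)

total_rsd = sqrt(var_bc + var_de)  # variances add across stages
print(f"total RSD = {total_rsd:.1%}")
```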

Advantages of using Python instead of Commercial Software

The same output could also be achieved with an off-the-shelf software package. However, knowing the details and testing with code in Python could allow the practitioner to easily simulate different scenarios. For example, say the target in this example was a relative standard deviation of 3%. The practitioner could test different higher sample masses or smaller nominal particle sizes to achieve a lower fundamental error. Alternatively, if the relative standard deviation target is a more flexible 10%, different scenarios could be tested to determine which minimizes the required sampling resources.

A solid understanding of Gy’s sampling theory is important to ensure representative samples are taken of the volume or lot being considered. Determining the factors to quantify Gy’s fundamental error is challenging enough, but the actual construction of the sampling nomogram and calculation of the fundamental error are quite simple and can be done in Python. We shouldn’t always have to rely on expensive software and black box solutions when simple open source options are available.

Theoretical methods such as Gy’s Sampling Theory are great for informing decisions made in the field; however, it is crucial to consider the operational difficulties or constraints which may be present. Similarly, decision-making based purely on operational ease could lead to erroneous results if there are no theoretical considerations behind the procedure. Ultimately, a balance is required between theoretical and practical knowledge so that an optimal result can be achieved; it may be neither the best theoretical solution nor the easiest to implement, but it will be the best solution given the available resources and constraints.

The figure below provides a rough illustration of how optimal productivity is achieved when balancing theoretical knowledge with practical operational-based experience. Either extreme, obsessing over every tiny theoretical detail or just doing what is most practical, would result in suboptimal productivity.

Schematic illustrating the productivity optimum when an adequate balance is achieved between the time being productive and the time & resources spent improving procedures and productivity. Image by author

In our example with sampling the high copper grade lot, perhaps the theoretical ideal would be to sample ten times more and crush everything to 0.1 in; this becomes challenging operationally and perhaps impossible without the required equipment. Conversely, just grabbing a single 1 kg rock would very likely give an incorrect result but would be easy to do in practice. Thus a balance between the two is required to achieve the optimal result.

[1] Gy, P.M., 1976. The sampling of particulate materials — a general theory. Int. J. Miner. Process., 3: 289–312.

[2] Gy, P.M., 1998. Sampling for Analytical Purposes. Wiley, Chichester, UK.


