
A Few Properties of Random Correlation Matrices and Their Nearest Correlation Matrices | by Rohan Kotwani | Jan, 2023



By Author

The purpose of this article is to evaluate the performance of two algorithms for finding the nearest positive semi-definite (PSD) correlation matrix: a classical method and a brute force approach. This is relevant for data with high dimensions or missing values, as these types of data can often produce non-PSD matrices. The nearest PSD correlation matrix is useful for understanding the correlation structure geometrically, for modeling real-world processes, and for analyzing the distribution of correlations within the solution space. It can also be used to improve estimates of correlations and in statistical models that rely on the correlation matrix. The brute force approach may provide insight into the continuity of the solution space and how small changes in individual correlations can affect the validity of the transformed correlation matrix.

The notebook for generating the data and plots below can be found here.

Equicorrelation Matrices

A correlation matrix that has identical off-diagonal elements is called an equicorrelation matrix. In this paper, section 2.4, it was found that valid PSD equicorrelation matrices have correlation values in the interval [-1/(D-1), 1), where D is the number of dimensions of the correlation matrix. (The lower bound follows from the eigenvalues of an equicorrelation matrix: 1 + (D-1)ρ with multiplicity one and 1 - ρ with multiplicity D-1; requiring 1 + (D-1)ρ ≥ 0 gives ρ ≥ -1/(D-1).)

import numpy as np

A = np.ones((50, 50))

indices = np.tril_indices(A.shape[1], k=-1)

for v in np.linspace(-1/(A.shape[1]-1), 0.99, 10):
    A[indices] = v        # fill the lower triangle
    A.T[indices] = v      # fill the upper triangle (A.T is a view of A)
    print(is_psd(A))      # is_psd is defined in a later section

The loop above results in all equicorrelation matrices being positive semi-definite. For non-equicorrelation matrices, I also found some empirical evidence that seems to suggest that correlation matrices whose off-diagonal entries are all drawn from the interval [-1/(D-1), 1/(D-1)] are also PSD.

A = np.ones((50, 50))

indices = np.tril_indices(A.shape[1], k=-1)

for _ in range(100):
    r = np.random.uniform(-1/(A.shape[1]-1), 1/(A.shape[1]-1), size=len(indices[0]))
    A[indices] = r
    A.T[indices] = r
    print(is_psd(A))

For 100 simulations, all randomly generated correlation matrices within this interval resulted in PSD correlation matrices. This is consistent with the fact that such matrices are weakly diagonally dominant: each row's off-diagonal magnitudes sum to at most (D-1) · 1/(D-1) = 1, which does not exceed the unit diagonal, so the matrix is guaranteed to be PSD. This was a useful insight for setting the initial correlations for the brute-force search approach.

Generating Random Correlation Matrices

Below is a class that generates multiple correlation matrices with off-diagonal values drawn from a uniform distribution between -1 and 1. There is also a helper function that determines whether the matrix is positive semi-definite, i.e., whether its eigenvalues are greater than or equal to zero. Matrices that are not PSD are saved into a list. Another Python expression that can be used to check for a PSD matrix is the following:

# Alternative check: all eigenvalues are non-negative (positive semi-definite)
np.all(np.linalg.eigvals(A) >= 0)

def is_psd(A):
    # Cholesky factorization succeeds only for symmetric (numerically)
    # positive definite matrices and raises LinAlgError otherwise
    if np.array_equal(A, A.T):
        try:
            np.linalg.cholesky(A)
            return True
        except np.linalg.LinAlgError:
            return False
    else:
        return False

class randomCorrelation():

    def __init__(self, n_distr, n_dims):

        self.n_distr = n_distr
        self.dims = n_dims
        self.rM = []    # non-PSD matrices are collected here

    def generate_correlations(self):

        psd = 0
        count = 0

        for i in range(self.n_distr):

            # Random pseudo-correlation matrix: unit diagonal, uniform off-diagonals
            half = np.random.uniform(-1, 1, size=self.dims*(self.dims+1)//2 - self.dims)
            V = np.diag(np.ones(self.dims))
            indices = np.tril_indices(self.dims, k=-1)

            V[indices] = half
            V.T[indices] = half

            valid = is_psd(V)    # positive semi-definite condition
            psd += valid

            if not valid:        # 'not' (rather than '~') negates a Python bool
                self.rM.append(V)
                count += 1

        print('percent positive semi-definite', psd/self.n_distr)

With this data generation scheme, the percentage of correlation matrices that are positive semi-definite decreases with the number of dimensions:

2 dimensions, percent positive semi-definite 1.0
3 dimensions, percent positive semi-definite 0.6233
4 dimensions, percent positive semi-definite 0.18645
5 dimensions, percent positive semi-definite 0.022
6 dimensions, percent positive semi-definite 0.00115
7 dimensions, percent positive semi-definite 0.0
8 dimensions, percent positive semi-definite 0.0
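
The table above can be reproduced roughly as follows. This is a minimal sketch; the exact number of simulations per dimension is not stated here, so 20,000 is an assumption.

for d in range(2, 9):
    print(d, 'dimensions,', end=' ')
    randomCorrelation(n_distr=20000, n_dims=d).generate_correlations()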

Nearcorr — Nearest Correlation Algorithm — Python Package

Admittedly, I did not spend much time looking at the theory behind the classical algorithm. However, the algorithm is available in this Python package. The algorithm was based on this paper.
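
The article links to a specific package for this step. As a hedged alternative that is widely available, statsmodels provides a Higham-style nearest-correlation routine with a similar purpose; a minimal sketch using the classes defined above:

from statsmodels.stats.correlation_tools import corr_nearest

rc = randomCorrelation(n_distr=1, n_dims=50)
rc.generate_correlations()
if rc.rM:                                     # a non-PSD matrix was generated
    V = rc.rM[0]
    V_near = corr_nearest(V, threshold=1e-7)
    print(np.linalg.eigvalsh(V_near).min())   # smallest eigenvalue of the projection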

Brute Force Search Algorithm — KernelML

KernelML is a particle optimizer that uses parameter constraints and sampling methods to minimize a customizable loss function. The package uses a Cythonized backend and parallelizes operations across multiple cores with Numba. KernelML is now available on the Anaconda cloud and PyPI (pip). Please see the KernelML extension on the documentation page.

The loss function between a non-PSD correlation matrix and a transformed correlation matrix is shown below:

def loss_fcn(X, y, w, args):

    dim = X.shape[1]
    newV = X.copy()
    indices = np.tril_indices(dim, k=-1)

    newV[indices] = w.flatten()
    newV.T[indices] = w.flatten()

    # The primary error term is the exponential of the mean magnitude of the
    # negative eigenvalues. Lower-priority terms compare the similarity between
    # the original and transformed correlation matrices.
    eigenvals = np.linalg.eigvals(newV)
    negative_eigenvals = np.abs(np.clip(eigenvals, -np.inf, 0))
    err = np.mean(negative_eigenvals)
    loss = np.exp(err + 3) + np.mean(np.abs(X - newV)) - np.mean(X * newV)

    return loss

The loss function clips the eigenvalues of the transformed correlation matrix at zero, keeping only the magnitudes of the negative eigenvalues. An exponential of the average of this vector is then taken to create the primary error metric. The exponential was added to prioritize this term with respect to the other metrics, i.e., the similarity errors between the original and transformed matrices. A mean absolute error term and an inner-product similarity term measure how close the two matrices are.
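
As a quick sanity check, the loss can be evaluated on one of the non-PSD matrices generated earlier. This is a minimal sketch; the y and args parameters are unused by this particular loss function and are passed as None.

rc = randomCorrelation(n_distr=1, n_dims=50)
rc.generate_correlations()
X_bad = rc.rM[0]                               # with 50 dimensions, a random draw is virtually never PSD
w0 = X_bad[np.tril_indices(50, k=-1)]          # start from the original off-diagonal correlations
print(loss_fcn(X_bad, None, w0, None))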

50 Dimensional Random Correlation Matrix

A 50-dimensional, non-PSD correlation matrix was generated with the class described above and transformed with NearCorr and KernelML.

By Author

Both solutions appear to reduce the negative eigenvalues significantly. However, upon close inspection of the eigenvalues, the NearCorr solution produced a few very small negative eigenvalues, some with a complex component.

KernelML Eigenvalues

[3.26102730e+00, 3.16188595e+00, 2.85512917e+00, 2.78161964e+00,
2.70800130e+00, 2.52089846e+00, 2.43190820e+00, 2.45786875e+00,
2.15509048e+00, 2.09047656e+00, 1.89868314e+00, 1.81315327e+00,
1.76542707e+00, 1.73618212e+00, 1.61980153e+00, 1.49914355e+00,
1.44361494e+00, 1.36868364e+00, 1.23239339e+00, 1.14406333e+00,
1.04860804e+00, 9.32217250e-01, 8.39964513e-01, 7.59756916e-01,
6.45826073e-01, 6.16302503e-01, 5.53247251e-01, 5.09136266e-01,
4.50253289e-01, 4.12152428e-01, 3.00354113e-01, 2.00972127e-01,
1.66795745e-01, 1.48190057e-01, 1.20280039e-01, 9.69555512e-02,
6.96143017e-02, 4.76321951e-02, 2.87903621e-02, 2.59533245e-02,
2.04674036e-02, 1.69942379e-02, 1.48020386e-02, 1.60456636e-04,
6.15699833e-04, 1.94234701e-03, 3.10333646e-03, 6.37932838e-03,
8.38614406e-03, 9.09486976e-03]

NearCorr Eigenvalues

[ 5.49745838e+00+0.00000000e+00j,  5.06032122e+00+0.00000000e+00j,
4.74855867e+00+0.00000000e+00j, 4.41371180e+00+0.00000000e+00j,
4.20405340e+00+0.00000000e+00j, 3.67647162e+00+0.00000000e+00j,
3.51553607e+00+0.00000000e+00j, 1.78206741e-01+0.00000000e+00j,
3.04522546e-01+0.00000000e+00j, 3.01128208e+00+0.00000000e+00j,
7.17349636e-01+0.00000000e+00j, 2.65682984e+00+0.00000000e+00j,
2.43941612e+00+0.00000000e+00j, 2.22224084e+00+0.00000000e+00j,
1.92005432e+00+0.00000000e+00j, 1.07636508e+00+0.00000000e+00j,
1.35075497e+00+0.00000000e+00j, 1.44164929e+00+0.00000000e+00j,
1.56521738e+00+0.00000000e+00j, -5.48833868e-14+0.00000000e+00j,
-3.76350836e-14+0.00000000e+00j, -1.42503187e-14+0.00000000e+00j,
1.14473857e-14+0.00000000e+00j, -1.05789546e-14+0.00000000e+00j,
-9.05796853e-15+0.00000000e+00j, -6.70062402e-15+1.07696867e-16j,
-6.70062402e-15-1.07696867e-16j, 7.26141572e-15+0.00000000e+00j,
6.28307335e-15+0.00000000e+00j, 6.04998893e-15+0.00000000e+00j,
-4.92053725e-15+0.00000000e+00j, -4.34710475e-15+0.00000000e+00j,
5.23439287e-15+0.00000000e+00j, 4.78990334e-15+0.00000000e+00j,
4.26552008e-15+0.00000000e+00j, 3.79361532e-15+0.00000000e+00j,
3.62074251e-15+0.00000000e+00j, 3.06192155e-15+0.00000000e+00j,
2.69403693e-15+0.00000000e+00j, -3.20115062e-15+0.00000000e+00j,
-2.64786225e-15+0.00000000e+00j, -2.25518297e-15+0.00000000e+00j,
1.85997094e-15+0.00000000e+00j, 1.23302222e-15+0.00000000e+00j,
4.34231227e-16+0.00000000e+00j, 1.17471239e-16+0.00000000e+00j,
-2.12024603e-16+0.00000000e+00j, -7.43680143e-16+0.00000000e+00j,
-1.08268187e-15+0.00000000e+00j, -1.27034684e-15+0.00000000e+00j]
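
A side note on the complex components above: they most likely appear because a general eigensolver (np.linalg.eigvals) was applied to a symmetric matrix; the symmetric routine returns purely real eigenvalues. A minimal sketch, where V_nearcorr is assumed to hold the NearCorr solution:

# eigvalsh exploits symmetry and returns real eigenvalues in ascending order
eigs = np.linalg.eigvalsh(V_nearcorr)
print(eigs.min())   # the tiny negative values remain, but without imaginary parts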

Are there potential solutions around NearCorr’s solution? Is the squishing of eigenvalues in the KernelML solution necessary to produce valid matrices?

By Author

The distribution of correlations is similar to the eigenvalue plot, i.e., the variance is smaller for the KernelML solution.

Similarity Between the Transformed Correlation Matrices

The plot below shows the original non-PSD correlation matrix, the NearCorr solution, and the KernelML solution. There are 1225 unique off-diagonal elements in a 50-dimensional correlation matrix.

By Author

Upon closer inspection, there seem to be small patches of high and low values that appear in all plots. Two metrics are plotted below comparing the original and transformed matrices: 1) the correlations and 2) the mean squared error between these correlation matrices.

By Author

The correlations and mean absolute errors paint the same picture. While the NearCorr and KernelML solutions are similar, the NearCorr solution is closer to the original matrix, although it is not exactly PSD.

Plotting the Determinant Across Potential Solutions

The determinant is a fundamental concept in mathematics that has many important applications. For a valid correlation matrix, it is a scalar value between zero and one that can be calculated from the elements of the matrix.

For the correlation matrix, the determinant can be thought of geometrically. The eigenvalues give the lengths of the axes of the ellipsoid that the transformation maps the unit sphere onto, and the eigenvectors give the orientation of those axes. The determinant is the product of the eigenvalues, so it measures how much the transformation shrinks the volume of the sphere. For example, an identity matrix has a determinant of one because all of its eigenvalues are equal to one.
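
To make the determinant-eigenvalue connection concrete, here is a minimal sketch with hypothetical correlation values; the determinant equals the product of the eigenvalues and falls between zero and one for a valid correlation matrix:

C = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
eigvals = np.linalg.eigvalsh(C)
print(np.prod(eigvals), np.linalg.det(C))   # both are approximately 0.68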

Visualization of the Determinant of an Identity Matrix -By Author
Visualization of the Determinant of a Rotated Identity Matrix — By Author

A histogram of log-determinants, shown below, was plotted for PSD correlation matrices generated from KernelML. Note: the KernelML solution samples PSD correlation matrices while trying to improve the similarity between the generated and original matrices. This means that the generated solutions are somewhat correlated and not drawn from a truly random distribution.
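
A histogram like this could be produced along the following lines. This is a minimal sketch; sampled_matrices is a hypothetical list holding the PSD matrices accepted during the KernelML run.

import matplotlib.pyplot as plt

log_dets = [np.linalg.slogdet(M)[1] for M in sampled_matrices]   # log|det| of each sample
plt.hist(log_dets, bins=50)
plt.xlabel('log-determinant')
plt.ylabel('count')
plt.show()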

By Author

The log-determinants and loss function were plotted over each KernelML update. It can be seen that the determinant decreases as the loss decreases, but the relationship is not exactly linear.

By Author
By Author

It is possible that running the algorithm for a longer period of time would eventually produce a distribution of sampled correlations around a local minimum. The plot below shows how the correlations changed over the updates.

By Author

The plot below shows the distribution of the correlation parameters with the highest variation across updates.

By Author

The solution space seems to be somewhat discrete, in that there are gaps between the values the correlation parameters can take while the matrix remains a valid PSD correlation matrix.
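
This sensitivity can be probed directly by perturbing a single correlation and re-checking validity. A minimal sketch, where V_psd is assumed to hold a valid PSD solution such as the KernelML matrix from earlier, and 0.05 is an arbitrary perturbation size:

V_test = V_psd.copy()
V_test[0, 1] += 0.05    # nudge one correlation and its symmetric counterpart
V_test[1, 0] += 0.05
print(is_psd(V_test))   # may flip to False even for a small change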

Conclusion

There are real-world use cases for finding the nearest correlation matrix. The statistical analysis of a brute-force search for nearest correlation matrices resulted in a few ideas and questions:

  • The brute force approach was able to produce a valid PSD matrix, and initializing the search space within [-1/(D-1), 1/(D-1)] seemed to make finding a solution easier.
  • Is the brute force algorithm finding a local minimum? Do solutions with larger eigenvalues, correlations, and determinants exist?
  • The solution space for valid PSD matrices seems to be somewhat discrete.

These findings seem to suggest that a reinforcement learning solution might be possible. However, there would need to be a way to search over discretized solutions, because small updates to the correlations might not result in valid PSD matrices.

LinkedIn: https://www.linkedin.com/in/rkotwani

