---
title: "Using colourvision"
author: "Felipe M. Gawryszewski"
date: "`r Sys.Date()`"
output:
  html_vignette:
    toc: true
    toc_depth: 3
    number_sections: true
    fig_caption: true
vignette: >
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteIndexEntry{Using colourvision}
  %\VignetteEncoding{UTF-8}
bibliography: MSref.bib
---
```{r echo=FALSE, error=FALSE, message=FALSE, warning=FALSE}
library(colourvision)
```
# Introduction
This is a practical user guide to `colourvision`. Colour vision models allow colour patches to be assessed independently of human vision. More detailed explanations of colour vision models are provided elsewhere [@Kelber:2003p1255; @ENDLER:2005p538; @Osorio:2008ep; @Kemp:2015jf; @Renoult:2017dv; @Gawryszewski:gt].
Package `colourvision` provides functions for the three models most commonly used by ecologists, behavioural ecologists, and evolutionary biologists [@Chittka:1992p186; @Vorobyev:1998p1813; @Vorobyev:1998p335; @ENDLER:2005p538], as well as a generic function to build alternative models based on the same set of general assumptions. These models have been extended to accept any number of photoreceptor types, so that the same model may be applied, for instance, to a dichromatic mammal and a tentatively pentachromatic dipteran. Modelling functions provide a comprehensive output, which may be visualised in publication-ready colour space plots.
# Data handling
This section shows functions applied to colour data before model calculations.
## `spec.denoise()`
Applies `smooth.spline` to a data frame containing spectrometric data. Useful when the raw spectrophotometer output is noisy.
```{r echo=FALSE}
spider<-read.table("615ad.ProcSpec.transmission", nrows=3000, skip=100)
```
```{r}
spider.smooth<-spec.denoise(spider)
```
```{r echo=FALSE, fig.cap = "Figure 1. Effect of `spec.denoise( )` applied to a reflectance curve.", fig.align="center", fig.width=8, fig.height=4, out.width='600px'}
par(mfrow=c(1,2), mar=c(5, 4, 4, 2) + 0.1)
plot(spider, ylim=c(0,50), xlim=c(300,700), type="l",
ylab="Reflectance (%)", xlab="Wavelength (nm)",
main="before")
plot(spider.smooth, ylim=c(0,50), xlim=c(300,700), type="l",
ylab="Reflectance (%)", xlab="Wavelength (nm)", main="after")
```
## `photor()`
Photoreceptor sensitivity curves are seldom available, but they can be estimated using the wavelength of maximum sensitivity [$\lambda_{max}$; @Govardovskii:2000tf]. `photor` generates photoreceptor sensitivity spectra based on $\lambda_{max}$ values:
```{r}
human<-photor(lambda.max = c(420,530,560), lambda = seq(400, 700, 1))
```
```{r echo=FALSE, fig.cap = "Figure 2. Estimated photoreceptor sensitivity curves based on the wavelength of maximum sensitivity using `photor( )` function.", fig.align="center", fig.width=4, fig.height=4, out.width='300px'}
plot(human[,2]~human[,1], type="l", ylim=c(0,1),
col="blue", lwd=2,
ann = FALSE,
mgp = c(3, 0.7, 0))
mtext(side = 2, text = "Sensitivity", line = 2.5)
mtext(side = 1, text = "Wavelength(nm)", line = 2.5)
lines(human[,3]~human[,1], col="green", lwd=2)
lines(human[,4]~human[,1], col="red", lwd=2)
```
## `logistic()`
Creates a sigmoid reflectance curve. Useful for simulations using colour vision models [see for instance @Gawryszewski:gt].
```{r}
R<-logistic(x0=500, L=80, k=0.04)
```
```{r echo=FALSE, fig.cap = "Figure 3. Simulated reflectance spectrum using the `logistic( )` function.", fig.align="center", fig.width=5, fig.height=4, out.width='400px'}
plot(R, ylim=c(0,100),xlim=c(300,700), xlab="Wavelength (nm)", ylab="Reflectance (%)", type="l")
```
## `energytoflux()`
Photoreceptors respond to photon numbers, not photon energy [@Endler:1990p1253]. `energytoflux` converts irradiance data from energy units ($\mu W \times cm^{-2} \times nm^{-1}$) to quantum flux units ($\mu mol \times m^{-2} \times s^{-1}$).
```{r echo=FALSE}
D65energy<-read.csv("D65energy.csv")
```
```{r echo=TRUE}
D65photon<-energytoflux(D65energy)
```
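As a rough sketch of the physics behind this conversion (my own illustration; `energytoflux` may differ in implementation details), each photon carries energy $hc/\lambda$, so photon flux is proportional to spectral energy multiplied by wavelength:
```{r}
h  <- 6.626e-34  # Planck constant (J s)
cc <- 2.998e8    # speed of light (m s^-1)
Na <- 6.022e23   # Avogadro constant (mol^-1)
wl    <- D65energy[,1]                  # wavelength (nm)
E.Wm2 <- D65energy[,2]*1e-2             # uW cm^-2 nm^-1 -> W m^-2 nm^-1
flux  <- E.Wm2*(wl*1e-9)/(h*cc*Na)*1e6  # umol m^-2 s^-1 nm^-1
head(flux)                              # should approximate D65photon[,2]
```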
```{r echo=FALSE, fig.cap = "Figure 4. CIE D65 in photon energy (left) and quantum flux units (right)", fig.align="center", fig.width=8, fig.height=4, out.width='600px'}
par(mfrow=c(1,2),mar=c(5, 5, 4, 2) + 0.1)
plot(D65energy, xlim=c(300,700), type="l",
ylab=expression("Spectral energy"~(uW~cm^-2*nm^-1)), xlab="Wavelength (nm)",
main="CIE D65")
plot(D65photon, xlim=c(300,700), type="l",
ylab=expression("Photon flux"~(umol~m^-2*s^-1)), xlab="Wavelength (nm)", main="CIE D65")
```
# Colour Vision Models
## The Basics
This section gives a brief introduction to colour vision models and presents the internal package functions used in model calculations. For model-specific functions (`CTTKmodel`, `EMmodel`, `RNLmodel`, `RNLthres`, and `GENmodel`), please refer to the later sections of this manual.
Colour vision models require a minimum of four inputs: (1) photoreceptor sensitivity curves, (2) a background reflectance spectrum, (3) an illuminant spectrum, and (4) the stimulus reflectance spectrum. Receptor noise limited models additionally require a noise estimate for each photoreceptor type.
First, one needs to estimate the photon catch of each photoreceptor type in the retina, which is a function of the stimulus reflectance, the photoreceptor sensitivity, and the illuminant spectrum:
$$Q_i = \int_{300}^{700} R(\lambda)I(\lambda)C_i(\lambda) d\lambda$$
where $R(\lambda)$ denotes the stimulus reflectance, $I(\lambda)$ the illuminant spectrum, and $C_i(\lambda)$ the sensitivity curve of photoreceptor type $i$. Note that the illuminant spectrum must be in quantum flux units, not energy units, because photoreceptors respond to photon numbers, not photon energy.
In `colourvision`, quantum catches are computed by the function `Q`. Here, the quantum catch of a given stimulus (`R`) is estimated for each photoreceptor type of *Apis mellifera* [`bee`; @Peitsch:1992p214] under the CIE D65 standard illuminant (`D65`):
```{r Qcatches}
R<-logistic(x=seq(300,700,1), x0=500, L=80, k=0.04)
data("bee")
data("D65")
Qcatch1<-Q(R=R,I=D65,C=bee[c(1,2)],interpolate=TRUE,nm=seq(300,700,1))
Qcatch2<-Q(R=R,I=D65,C=bee[c(1,3)],interpolate=TRUE,nm=seq(300,700,1))
Qcatch3<-Q(R=R,I=D65,C=bee[c(1,4)],interpolate=TRUE,nm=seq(300,700,1))
```
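To make the integral concrete, the quantum catch can be approximated by a discrete sum over 1-nm steps. The sketch below is my own illustration (`Q` may use a different integration rule internally) and should approximate `Qcatch1`:
```{r}
nm <- seq(300, 700, 1)
Rv <- approx(R[,1],   R[,2],   xout=nm)$y  # stimulus reflectance
Iv <- approx(D65[,1], D65[,2], xout=nm)$y  # illuminant (quantum flux)
Cv <- approx(bee[,1], bee[,2], xout=nm)$y  # first photoreceptor sensitivity
sum(Rv*Iv*Cv)  # discrete approximation of the quantum-catch integral
```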
In general, colour vision models assume that photoreceptors are adapted to the background. This is achieved by expressing the quantum catch of each photoreceptor type relative to the quantum catch generated by the background reflectance (the von Kries transformation):
$$Qb_i = \int_{300}^{700} Rb(\lambda)I(\lambda)C_i(\lambda) d\lambda$$
$$q_i = \frac{Q_i}{Qb_i}$$
where $Rb$ is the background reflectance, $Q_i$ is the quantum catch from the stimulus reflectance, and $Qb_i$ is the quantum catch from the background reflectance.
Relative quantum catches are calculated using the function `Qr`. Here, photoreceptors are assumed to be adapted to a background reflectance based on samples collected in the Brazilian savanna [@Gawryszewski:2012jv]:
```{r Qrcatches}
data("Rb")
Qr1<-Qr(R=R,Rb=Rb, I=D65,C=bee[c(1,2)],interpolate=TRUE,nm=seq(300,700,1))
Qr2<-Qr(R=R,Rb=Rb, I=D65,C=bee[c(1,3)],interpolate=TRUE,nm=seq(300,700,1))
Qr3<-Qr(R=R,Rb=Rb, I=D65,C=bee[c(1,4)],interpolate=TRUE,nm=seq(300,700,1))
```
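The von Kries step can be verified by hand: dividing the stimulus quantum catch by the background quantum catch should reproduce `Qr1` (a sketch, reusing `Q` and `Qcatch1` from above):
```{r}
Qb1 <- Q(R=Rb, I=D65, C=bee[c(1,2)], interpolate=TRUE, nm=seq(300,700,1))
Qcatch1/Qb1  # should match Qr1
```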
The relationship between photoreceptor input and output is assumed to be non-linear. Each colour vision model uses a different transformation function (e.g., `log(x)`, `x/(x+1)`), but with the same general result:
```{r echo=FALSE}
X<-seq(1,10,by=0.1)
logY<-log(X)
hyperY<-X/(X+1)
```
```{r echo=FALSE, fig.cap="Figure 5. Relationship between photoreceptor input and output typically used in colour vision models.", fig.align="center", fig.width=5, fig.height=4, out.width='400px'}
#par(cex.axis=0.9, cex.lab=0.9)
plot(logY~X, xlab="q", ylab="E", type="l", lwd=2, ylim=c(0,max(logY)))
lines(hyperY~X,col="red", lwd=2)
legend(x=1,y=max(logY),
legend=c("log(q)","q/(q+1)"),
bty="n", col=c("black","red"),
lty=1, xjust=0, cex=0.9)
```
For instance,
@Chittka:1992p186 assumes an asymptotic curve with limit = 1:
$$E_i = \frac{q_i}{q_i+1}$$
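Applied to the relative quantum catches computed above, this transformation gives the photoreceptor outputs (a sketch of what `CTTKmodel` computes internally):
```{r}
E1 <- Qr1/(Qr1 + 1)
E2 <- Qr2/(Qr2 + 1)
E3 <- Qr3/(Qr3 + 1)
```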
Photoreceptor outputs are then projected as equidistant vectors into a colour space. Each model arranges these vectors differently; however, the arrangement is arbitrary and has no effect on model predictions. The length of the resultant vector represents the chromaticity distance of the stimulus in relation to the background, and its coordinates represent the stimulus locus in the animal colour space (colour locus).
For instance, `colour_space` generates a general colour space based on any number of photoreceptor types, and calculates colour locus coordinates for a given photoreceptor output:
```{r}
colour_space(n=3, type="length", length=1, edge=NA, q=c(Qr1,Qr2,Qr3))
```
## Chittka (1992) Colour Hexagon
@Chittka:1992p186 developed a colour vision model based on trichromatic hymenopteran vision. This model was later extended to tetrachromatic avian vision [@Thery:2002p163]. In `colourvision`, it has been further extended to accept any number of photoreceptor types ($n\geq2$).
Photoreceptor outputs ($E_i$) are calculated by:
$${E_i = \frac{q_i}{q_i+1}}$$
Then, for trichromatic vision, coordinates in the colour space are found by [@Chittka:1992p186]:
$${X_1 = \frac{\sqrt{3}}{2}(E_3-E_1)}$$
$${X_2 = E_2-\frac{1}{2}(E_1+E_3)}$$
For tetrachromatic vision [@Thery:2002p163]:
$${X_1 = \frac{\sqrt{3}\sqrt{2}}{3}(E_3-E_4)}$$
$${X_2 = E_1-\frac{1}{3}(E_2+E_3+E_4)}$$
$${X_3 = \frac{2\sqrt{2}}{3}(\frac{1}{2}(E_3+E_4)-E_2)}$$
Then, for a pentachromatic animal [@Gawryszewski:gt]:
$${X_1 = \frac{5}{2\sqrt{2}\sqrt{5}}(E_2-E_1)}$$
$${X_2 = \frac{5\sqrt{2}}{2\sqrt{3}\sqrt{5}}(E_3-\frac{E_1+E_2}{2})}$$
$${X_3 = \frac{5\sqrt{3}}{4\sqrt{5}}(E_4-\frac{E_1+E_2+E_3}{3})}$$
$${X_4 = E_5-\frac{E_1+E_2+E_3+E_4}{4}}$$
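As a concrete check, the trichromatic coordinates and the chromaticity distance to the background can be computed directly from the $E$-values obtained in the previous section (a sketch; `CTTKmodel` below performs these steps internally):
```{r}
X1 <- (sqrt(3)/2)*(E3 - E1)
X2 <- E2 - 0.5*(E1 + E3)
sqrt(X1^2 + X2^2)  # chromaticity distance to the background
```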
### `CTTKmodel()`
The @Chittka:1992p186 model is implemented in the function `CTTKmodel`. This function requires (1) photoreceptor sensitivity curves, (2) a background reflectance spectrum, (3) an illuminant spectrum, and (4) stimulus reflectance spectra.
A worked example:
1. Load data files:
*Apis mellifera* photoreceptor sensitivity curves [@Peitsch:1992p214], the background reflectance spectrum, and the illuminant spectrum:
```{r}
data("bee")
data("Rb")
data("D65")
```
```{r echo=FALSE, fig.cap = "Figure 6. *Apis mellifera* photoreceptor sensitivity curves [a; data from @Peitsch:1992p214], background reflectance spectrum [b; data from @Gawryszewski:2012jv], and CIE D65 standard illuminant (c).", fig.align="center", fig.width=15, fig.height=5, out.width='700px'}
par(mfrow=c(1,3))
plot(bee[,2]~bee[,1], type="l", ylim=c(0,100),
#ylab="Absorbance(%)",
#xlab="Wavelength(nm)",
col="violet", lwd=2, cex.axis=1, cex.lab=1,
ann = FALSE,
mgp = c(3, 0.7, 0))
mtext(side = 2, text = "Sensitivity(%)", line = 2.5, cex=1)
mtext(side = 1, text = "Wavelength(nm)", line = 2.5, cex=1)
lines(bee[,3]~bee[,1], col="blue", lwd=2)
lines(bee[,4]~bee[,1], col="green", lwd=2)
text("a)", x=305,y=98, cex=1.5)
plot(Rb[,2]~Rb[,1], type="l", ylim=c(0,100),
# ylab="Reflectance(%)",
# xlab="Wavelength(nm)",
lwd=2, cex.axis=1, cex.lab=1,
ann = FALSE,
mgp = c(3, 0.7, 0))
mtext(side = 2, text = "Reflectance(%)", line = 2.5, cex=1)
mtext(side = 1, text = "Wavelength(nm)", line = 2.5, cex=1)
text("b)", x=305,y=98, cex=1.5)
plot(D65[,2]~D65[,1], type="l", ylim=c(0,5), xlim=c(300,700),
#ylab="Photon flux (??mol/m2/s)",
#xlab="Wavelength(nm)",
lwd=2, cex.axis=1, cex.lab=1,
ann = FALSE,
mgp = c(3, 0.7, 0))
mtext(side = 2, text = expression("Photon flux"~(umol~m^-2*s^-1)), line = 2.5, cex=1)
mtext(side = 1, text = "Wavelength(nm)", line = 2.5, cex=1)
text("c)", x=305,y=4.9, cex=1.5)
```
2. Create simulated reflectance data:
```{r}
midpoint<-seq(from = 500, to = 600, 10)
W<-seq(300, 700, 1)
R<-data.frame(W)
for (i in 1:length(midpoint)) {
R[,i+1]<-logistic(x = seq(300, 700, 1), x0=midpoint[[i]], L = 70, k=0.04)[,2]+5
}
names(R)[2:ncol(R)]<-midpoint
```
```{r echo=FALSE, fig.cap = "Figure 7. Simulated reflectance spectra.", fig.align="center", fig.width=5, fig.height=4, out.width='400px'}
plot(R[,2]~R[,1], ylim=c(0,100),xlim=c(300,700), xlab="Wavelength (nm)", ylab="Reflectance (%)", type="n", cex=0.9)
colours<-rainbow(n=(ncol(R)-1))
for(i in 2:ncol(R)) {
lines(R[,i]~R[,1],col=colours[[ncol(R)-i+1]])
}
```
3. Run @Chittka:1992p186 model:
```{r}
CTTKmodel3<-CTTKmodel(R=R, I=D65, C=bee, Rb=Rb)
```
Model output provides the relative quantum catches (Qr), photoreceptor outputs (E), colour locus coordinates (X), and the chromaticity distance of the stimulus in relation to the background (deltaS).
`CTTKmodel3`
```{r, echo=FALSE, results='asis'}
knitr::kable(head(CTTKmodel3), caption = "Table 1. `CTTKmodel()` output of a trichromatic animal, showing the relative quantum catches (Qr), photoreceptor outputs (E), colour locus coordinates (X), and the chromaticity distance of stimulus in relation to the background (deltaS)")
```
## Endler & Mielke (2005)
The original model is available for tetrachromatic animals only. In `colourvision`, the model has been extended to any number of photoreceptor types [@Gawryszewski:gt; see also @Pike:2012tv].
First, relative quantum catches are log-transformed:
$$f_i = \ln(q_i)$$
where $q_i$ is the relative quantum catch of each photoreceptor type. The model uses only relative values, so that photoreceptor outputs ($E$) are given by:
$$E_i = \frac{f_i}{f_1+f_2+f_3+...+f_n}$$
Then, for tetrachromatic vision, colour locus coordinates are found by [@ENDLER:2005p538]:
$${X_1 = \sqrt{\frac{3}{2}}\left(\frac{1-2E_2-E_3-E_1}{2}\right)}$$
$${X_2 = \frac{-1+3E_3+E_1}{2\sqrt{2}}}$$
$${X_3 = E_1-\frac{1}{4}}$$
The tetrachromatic chromaticity diagram (tetrahedron) in @ENDLER:2005p538 has a maximum photoreceptor vector length of 0.75, which gives a tetrahedron with edge length $\sqrt{\frac{3}{2}}$. Chromaticity coordinates for other colour spaces may preserve either the same vector length or the same edge length.
For instance, for dichromatic vision, the coordinate ($X_1$) in the colour space preserving the same vector length is found by:
$${X_1 = \frac{3}{4}(E_2-E_1)}$$
whereas if the edge length is preserved, $X_1$ is found by:
$${X_1 = \frac{1}{2}\sqrt{\frac{3}{2}}(E_2-E_1)}$$
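For illustration, the dichromatic case can be worked through by hand, reusing two of the relative quantum catches computed earlier as a hypothetical dichromat (a sketch; `EMmodel` handles these steps internally):
```{r}
f <- log(c(Qr1, Qr3))        # hypothetical dichromat
E <- f/sum(f)
(3/4)*(E[2] - E[1])          # X1, preserving vector length
0.5*sqrt(3/2)*(E[2] - E[1])  # X1, preserving edge length
```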
### `EMmodel()`
Using the same data as in `CTTKmodel()` example:
```{r}
EMmodel3<-EMmodel(type="length",R=R,I=D65,Rb=Rb,C=bee)
```
Model output provides the relative quantum catches (Qr), photoreceptor outputs (E), colour locus coordinates (X), and the chromaticity distance of the stimulus in relation to the background (deltaS).
```{r, echo=FALSE, results='asis'}
knitr::kable(head(EMmodel3), caption = "Table 2. `EMmodel()` output of a trichromatic animal, showing the relative quantum catches (Qr), photoreceptor outputs (E), colour locus coordinates (X), and the chromaticity distance of stimulus in relation to the background (deltaS)")
```
## Receptor Noise Limited Models (Vorobyev & Osorio 1998; Vorobyev et al. 1998)
The Receptor Noise Limited Model assumes that chromatic discrimination is limited by noise at the photoreceptors [@Vorobyev:1998p1813; @Vorobyev:1998p335]. Model calculation follows similar steps to @Chittka:1992p186 and @ENDLER:2005p538, but has an additional step: calculation of the noise of the resultant vector based on the noise of each photoreceptor type.
Photoreceptor noise is seldom measured directly. In the absence of direct measurements, receptor noise (${e_i}$) can be estimated from the relative abundance of each photoreceptor type in the retina and a measurement of the noise-to-signal ratio of a single photoreceptor [@Vorobyev:1998p1813; @Vorobyev:1998p335]:
$${e_i=\frac{\nu}{\sqrt{\eta _i}}}$$
where ${\nu}$ is the noise-to-signal ratio of a single photoreceptor, and ${\eta_i}$ is the relative abundance of photoreceptor type $i$ in the retina.
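For example, with an illustrative single-receptor noise-to-signal ratio and a set of relative abundances (the same values used with `RNLmodel` later in this section):
```{r}
v <- 0.1         # single-receptor noise-to-signal ratio
n <- c(1, 2, 1)  # relative photoreceptor abundances
v/sqrt(n)        # receptor noise e_i of each photoreceptor type
```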
@Vorobyev:1998p1813 aimed to predict colour thresholds. Close to the threshold, the relationship between photoreceptor input and output is nearly linear, so that $E_i=q_i$. However, for comparisons between two colours that are not perceptually close to the threshold, a non-linear relationship between photoreceptor input and output must be used [@Vorobyev:1998p335]: $E_i=\ln(q_i)$.
Then, $\Delta$S is calculated by [eq. A7 in @Vorobyev:1998p335]:
$$(\Delta{S})^2 = V \Delta\vec{p} \bullet (V R V^T)^{-1} V \Delta\vec{p}$$
where $V$ is a matrix of column vectors, $\bullet$ denotes the inner product, $T$ denotes the transpose, $\Delta\vec{p}$ is a vector whose components represent differences between $E$-values, and $R$ is the covariance matrix of photoreceptor values. Photoreceptor values are assumed to be uncorrelated, so $R$ is a diagonal matrix whose diagonal elements are the photoreceptor variances ($e_i^2$):
$$R = \begin{bmatrix}
e_1^2 & 0 & 0 & \\
0 & e_2^2 & 0 & \cdots \\
0 & 0 & e_3^2 & \\
 & \vdots & & \ddots
\end{bmatrix}$$
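This quadratic form can be evaluated by hand using the vector matrix returned by `colour_space` (a sketch with illustrative noise values; relative to the background, $E=\ln(1)=0$, so $\Delta\vec{p}$ reduces to $\ln(q_i)$):
```{r}
e  <- c(0.13, 0.06, 0.11)    # illustrative photoreceptor noise values
V  <- colour_space(n=3, type="length", length=1, q=c(1,1,1))$vector_matrix
Rm <- diag(e^2)              # diagonal covariance matrix of receptor noise
dp <- log(c(Qr1, Qr2, Qr3))  # E-value differences relative to the background
x  <- V %*% dp
sqrt(t(x) %*% solve(V %*% Rm %*% t(V)) %*% x)  # deltaS
```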
The receptor noise limited model was originally developed to calculate $\Delta$S between two reflectance curves directly, without finding colour locus coordinates [see eqs. 3-5 in @Vorobyev:1998p1813]. Nonetheless, for visualisation purposes it is useful to project model results into chromaticity diagrams. This can be done by [@Gawryszewski:gt; see also @HempeldeIbarra:2001tf; and @Renoult:2017dv for alternative formulae]:
$$\vec{s} = \sqrt{(V R V^T)^{-1}} V \vec{p}$$
where $\vec{s}$ is a vector whose components represent the stimulus colour locus coordinates, $\vec{p}$ is a vector whose components represent the stimulus $E$-values, and the other elements are the same as in the preceding formula.
### `RNLmodel()`
Using the same data as in `CTTKmodel()` example:
```{r}
RNLmodel3<-RNLmodel(model="log",photo=3,R1=R,Rb=Rb,I=D65,C=bee,
noise=TRUE,e=c(0.13,0.06,0.11))
```
The model above calculates $\Delta$S values based on noise measured at *Apis mellifera* photoreceptors (`e=c(0.13,0.06,0.11)`). Alternatively, noise may be estimated from photoreceptor relative abundances:
```{r}
RNLmodel3.1<-RNLmodel(model="log", photo=3, R1=R, Rb=Rb, I=D65, C=bee,
noise=FALSE, n=c(0.5,1,0.5), v=0.1)
```
Furthermore, users may provide a second reflectance stimulus (`R2`) to be compared against the first (`R1`):
```{r}
R2<-logistic(x = seq(300, 700, 1), x0=512, L = 70, k=0.01)
RNLmodel3.2<-RNLmodel(model="log", photo=3, R1=R, R2=R2, Rb=Rb, I=D65, C=bee, noise=FALSE, n=c(1,2,1), v=0.1)
```
Model output provides the photoreceptor noise (e), the relative quantum catches (Qr), photoreceptor outputs (E), colour locus coordinates (X), and the chromaticity distance (deltaS) of the first stimulus in relation to the second (or to the background when `R2=Rb`).
```{r, echo=TRUE}
head(RNLmodel3.2)
```
### `RNLachrom()`
For achromatic contrast calculation (single photoreceptor), use the `RNLachrom` function:
```{r}
RNLmodel.achrom<-RNLachrom(R1=R,Rb=Rb,I=D65,C=bee[,c(1,4)],e=0.16)
```
The Weber achromatic contrast for a single photoreceptor is calculated by:
$$\Delta S = \left|\frac{\ln(Qr_1)-\ln(Qr_2)}{e}\right|$$
where $Qr_1$ and $Qr_2$ are the relative photoreceptor quantum catches of stimulus 1 (`R1`) and stimulus 2 (`R2`), and $e$ is the photoreceptor noise.
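With illustrative numbers (hypothetical quantum catches, not taken from the model above):
```{r}
qr1 <- 1.8; qr2 <- 1.0; e <- 0.16  # hypothetical values
abs((log(qr1) - log(qr2))/e)       # Weber achromatic contrast
```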
Model output provides the photoreceptor noise (e), the relative quantum catches (Qr), photoreceptor outputs (E), and the achromatic distance (deltaS) of the first stimulus in relation to the second (or to the background when `R2=Rb`).
```{r, echo=TRUE}
head(RNLmodel.achrom)
```
### `RNLthres()`
@Vorobyev:1998p1813 aimed to predict discrimination thresholds of monochromatic stimuli. By definition, the threshold is reached when $\Delta{S} = 1$, therefore [@Vorobyev:1998p1813]:
$$(1)^2 = V \Delta\vec{p} \bullet (V R V^T)^{-1} V \Delta\vec{p}$$
`RNLthres()` calculates thresholds of monochromatic light for a given background, illuminant, set of photoreceptor sensitivity curves, and photoreceptor noise.
Worked example:
1. Normalise the bee photoreceptor sensitivity curves to a maximum of 1:
```{r}
C<-bee
C[,2]<-C[,2]/max(C[,2])
C[,3]<-C[,3]/max(C[,3])
C[,4]<-C[,4]/max(C[,4])
```
2. Grey background:
```{r}
Rb.grey <- data.frame(300:700, rep(0.1, length(300:700)))
```
3. Model calculation:
```{r}
thres<-RNLthres(photo=3, Rb=Rb.grey, I=D65, C=C,
noise=TRUE, e = c(0.13, 0.06, 0.12))
```
The output is a `data.frame` with threshold values (`T`) and the log of sensitivity values (`S`) per wavelength (`nm`). Sensitivity is simply the inverse of the threshold ($S = \frac{1}{T}$).
```{r echo=FALSE, results='asis'}
knitr::kable(head(thres), caption = "Table 3. `RNLthres()` output, showing wavelength (`nm`), threshold values (`T`) and log of the sensitivity values (`S`).")
```
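As a quick consistency check (a sketch, assuming the output columns are named `nm`, `T`, and `S` as described above):
```{r}
head(log(1/thres$T))  # should match head(thres$S)
```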
## Generic model
A generic function (`GENmodel`) allows calculation of alternative models based on the same general assumptions as the models above. Note, however, that colour locus coordinates may differ, because the vector positions used by `GENmodel` are not necessarily the same as in each model-specific formula. Also, caution should be taken because models generated by `GENmodel` are not necessarily supported by experimental data.
### `GENmodel()`
Worked examples:
In this example, `GENmodel` applies the same transformation (`func=function(x){x/(1+x)}`) and uses a colour space with the same maximum vector length (`length=1`) as `CTTKmodel`:
```{r}
CTTKmodel3<-CTTKmodel(photo=3,R=R,Rb=Rb,I=D65,C=bee)
ANY.CTTKmodel3any<-GENmodel(photo=3, type="length", length=1,
                            R=R, Rb=Rb, I=D65, C=bee, vonKries=TRUE,
                            func=function(x){x/(1+x)}, unity=FALSE,
                            recep.noise=FALSE)
```
Note, however, that although Qr, E, and deltaS values are exactly the same, colour locus coordinates (X) differ between models (Tables 1 and 4). This happens because `GENmodel` uses a different arrangement of vectors than @Chittka:1992p186. The arrangement of photoreceptor output vectors is arbitrary; it has no biological meaning and no effect on model predictions.
`ANY.CTTKmodel3any`
```{r echo=FALSE, results='asis'}
knitr::kable(head(ANY.CTTKmodel3any), caption = "Table 4. `GENmodel( )` output, with model parameters based on the same set of assumptions as in Chittka (1992)")
```
```{r echo=FALSE, fig.cap="Figure 8. Photoreceptors outputs represented as vectors in a colour space. Each colour vision model uses differently arranged vectors. As long as pairwise angles between vectors are identical, vector arrangements have no biological significance.", fig.align="center", fig.width=8, fig.height=4, out.width='600px'}
q<-c(0.8,0.8,0.1)
x<-(sqrt(3)/2)*(q[[3]]-q[[1]])
y<-q[[2]]-0.5*(q[[1]]+q[[3]])
par(mfrow=c(1,2), cex=0.8)
plot(y~x, ylim=c(-1.3,1.3), xlim=c(-1.3,1.3), asp=1, xlab="X1",ylab="X2",
main="CTTKmodel")
arrows(x0=0,y0=0,x1=0,y1=1, length = 0.10)
arrows(x0=0,y0=0,x1=-0.86660254,y1=-0.5, length = 0.10)
arrows(x0=0,y0=0,x1=0.86660254,y1=-0.5, length = 0.10)
graphics::text(x=0,y=1,labels="E1",pos=3)
graphics::text(x=-0.86660254,y=-0.5,labels="E2", pos=1)
graphics::text(x=0.86660254,y=-0.5, labels="E3", pos=4)
cp<-colour_space(n=3, type="length", length=1, q=q)
plot(cp$coordinates[[2]]~cp$coordinates[[1]],
ylim=c(-1.3,1.3), xlim=c(-1.3,1.3), asp=1, xlab="X1",ylab="X2",
main="GENmodel")
v1<-cp$vector_matrix[,1]
v2<-cp$vector_matrix[,2]
v3<-cp$vector_matrix[,3]
arrows(x0=0,y0=0,x1=v1[[1]],y1=v1[[2]], length = 0.10)
arrows(x0=0,y0=0,x1=v2[[1]],y1=v2[[2]], length = 0.10)
arrows(x0=0,y0=0,x1=v3[[1]],y1=v3[[2]], length = 0.10)
text(x=v1[[1]],y=v1[[2]],labels="E1",pos=1)
text(x=v2[[1]],y=v2[[2]],labels="E2",pos=1)
text(x=v3[[1]],y=v3[[2]],labels="E3",pos=3)
```
Alternatively, users may choose to change some aspect of a model. In the example below, a model based on receptor noise is calculated, but with a different photoreceptor input-output transformation:
```{r message=FALSE, warning=FALSE}
RNLmodel3<-RNLmodel(model="log",photo=3,R1=R,Rb=Rb,I=D65,C=bee,
noise=TRUE,e=c(0.13,0.06,0.11))
ANY.RNLmodel3any<-GENmodel(photo=3, type="length", length=1,
                           R=R, Rb=Rb, I=D65, C=bee, vonKries=TRUE,
                           func=function(x){x/(1+x)}, unity=FALSE,
                           recep.noise=TRUE, noise.given=TRUE,
                           e=c(0.13,0.06,0.11))
```
Note that in this case several parameters differ between models:
`head(RNLmodel3[,c("E1_R1","E2_R1","E3_R1","X1_R1","X2_R1","deltaS")])`
```{r echo=FALSE, results='asis'}
knitr::kable(head(RNLmodel3[,c("E1_R1","E2_R1","E3_R1","X1_R1","X2_R1","deltaS")]), caption = "Table 5. `RNLmodel( )` results, showing photoreceptor outputs (Ei_R1), colour locus coordinates (Xi_R1), and the chromaticity distance to the background (deltaS).")
```
`head(ANY.RNLmodel3any[,c("E1","E2","E3","X1","X2","deltaS")])`
```{r echo=FALSE, results='asis'}
knitr::kable(head(ANY.RNLmodel3any[,c("E1","E2","E3","X1","X2","deltaS")]), caption = "Table 6. `GENmodel( )` based on receptor noise but with a different transformation between photoreceptor input and output. Model results showing photoreceptor outputs (Ei_R1), colour locus coordinates (Xi_R1), and the chromaticity distance to the background (deltaS).")
```
## Chromaticity distances ($\Delta$S)
Chromaticity distances ($\Delta$S) are the Euclidean distances between points in the animal colour space. It is frequently assumed that there is a positive, linear relationship between $\Delta$S values and the probability of discriminating between two colours [although this is not necessarily the case; see @Garcia:2017hl].
In `CTTKmodel`, `EMmodel`, and `GENmodel` outputs, `deltaS` values represent the distance between the stimulus and the background. In the `RNLmodel` output, `deltaS` represents the distance between `R1` and `R2`, or between `R1` and the background when `R2=Rb`.
However, one may want to compute the pairwise chromaticity distances between all stimuli. This is done by the `deltaS` function.
### `deltaS`
The `deltaS` function calculates a matrix of all possible pairwise comparisons between stimulus reflectance spectra:
```{r}
dS<-deltaS(CTTKmodel3)
dS
```
It may be useful to visualise the `deltaS` output graphically using the `corrplot` package:
```{r corrplot, echo=TRUE, fig.cap="Figure 9. Graphical representation of chromaticity distances using the `corrplot` package. Size and colour of circles denote chromaticity distances between each pair of reflectance spectra.", fig.align="center", fig.width=4, fig.height=4, out.width='400px'}
library(corrplot)
corrplot(corr=dS, is.corr=FALSE)
```
## Plotting models
Model outputs can easily be plotted using the `plot` function for dichromatic and trichromatic animals, or `plot3d.colourvision` (requires package `rgl`) for tetrachromatic animals. In addition, `radarplot` plots `Qr` and `E` values in a radar plot. For threshold data, `plot(model)` plots sensitivity values (ln-transformed) per wavelength.
### `plot(model)`
For dichromatic and trichromatic animals, `plot` draws model-specific chromaticity diagrams. For instance, the @Chittka:1992p186 hexagon for trichromatic animals:
```{r echo=TRUE, fig.cap="Figure 10. @Chittka:1992p128 colour hexagon representing the colour space boundaries of a thrichroamtic animal (*Apis mellifera*). Circles denotes colour locus of simulated reflectance spectra (Figure 7).", fig.align="center", fig.width=3, fig.height=3, out.width='300px'}
par(mar=c(1,1,1,1))
colours<-rainbow(n=(ncol(R)-1))
plot(CTTKmodel3, cex=0.5, col=colours)
```
The @ENDLER:2005p538 model adapted to trichromatic animals:
```{r echo=TRUE, fig.cap="Figure 11. @ENDLER:2005p538 colour triangle representing the colour space boundaries of a trichromatic animal (*Apis mellifera*). Circles denotes colour locus of simulated reflectance spectra (Figure 7).", fig.align="center", fig.width=3, fig.height=3, out.width='300px'}
par(mar=c(1,1,1,1))
plot(EMmodel3, cex=0.8, col=colours)
```
Colour spaces of receptor noise limited models [@Vorobyev:1998p1813; @Vorobyev:1998p335] do not have boundaries. Therefore, it is useful to plot results alongside vectors representing $E$-values:
```{r echo=TRUE}
par(mar=c(5, 4, 4, 2) + 0.1)
```
```{r echo=TRUE, fig.cap="Figure 12. Receptor Noise Limited Model colour space of a trichromatic animal (*Apis mellifera*). Circles denote colour locus of simulated reflectance spectra (Figure 7). Vectors represent photoreceptor outputs.", fig.align="center", fig.width=4, fig.height=4, out.width='300px'}
plot(RNLmodel3, cex=0.8, pch=16, col=colours)
```
### `plot(model)` for threshold values
Colour thresholds based on receptor noise limited models [@Vorobyev:1998p1813]:
```{r echo=TRUE, fig.cap="Figure 13. Threshold spectral sensitivity of a trichromatic animal (*Apis mellifera*) based on receptor noise (`RNLthres( )` function).", fig.align="center", fig.width=4, fig.height=4, out.width='300px'}
plot(thres)
```
### `plot3d.colourvision(model)`
For tetrachromatic animals, `plot3d.colourvision` plots model-specific 3D colour spaces. The @Chittka:1992p186 model is represented by a hexagonal trapezohedron:
```{r}
library(rgl)
CTTKmodel4<-CTTKmodel(R=R, I=D65, C=photor(c(350,420,490,560),beta.band=FALSE), Rb=Rb)
```
```{r echo=FALSE}
par3d(windowRect=c(100,100,800,800))
```
```{r echo=TRUE}
plot3d.colourvision(CTTKmodel4, s.col = colours, size=4)
```
```{r echo=FALSE}
view3d(theta = 30, phi = 10, zoom=0.8)
rgl.snapshot("CTTKmodel4.png")
close3d()
```
```{r echo=FALSE, dev="png", out.width='400px', fig.align="center", fig.cap = "Figure 14. @Chittka:1992p186 hexagonal trapezohedron representing the colour space boundaries of a tetrachromatic animal. Circles denotes colour locus of simulated reflectance spectra (Figure 7)."}
knitr::include_graphics('./CTTKmodel4.png', auto_pdf = FALSE)
```
```{r}
EMmodel4<-EMmodel(R=R, I=D65, C=photor(c(350,420,490,560),beta.band=FALSE), Rb=Rb)
```
```{r echo=FALSE}
par3d(windowRect=c(100,100,800,800))
```
```{r echo=TRUE}
plot3d.colourvision(EMmodel4, s.col = colours, size=4)
```
```{r echo=FALSE}
view3d(theta = 0, phi = 30, zoom=0.8)
rgl.snapshot("EMmodel4.png")
close3d()
```
```{r echo=FALSE, dev="png", out.width='400px', fig.align="center", fig.cap = "Figure 15. @ENDLER:2005p538 tetrahedron representing the colour space boundaries of a tetrachromatic animal. Circles denotes colour locus of simulated reflectance spectra (Figure 7)."}
knitr::include_graphics('./EMmodel4.png', auto_pdf = FALSE)
```
```{r}
RNLmodel4<-RNLmodel(model="log", R1=R, I=D65, C=photor(c(350,420,490,560),beta.band=FALSE), Rb=Rb, noise=TRUE, e=c(0.1,0.07,0.07,0.05))
```
```{r echo=FALSE}
par3d(windowRect=c(100,100,800,800))
```
```{r echo=TRUE}
plot3d.colourvision(RNLmodel4, xlab="", ylab="", zlab="", col = colours, size=4)
```
```{r echo=FALSE}
view3d(theta = 30, phi = 10, zoom=1)
rgl.snapshot("RNLmodel4.png")
close3d()
```
```{r echo=FALSE, dev="png", out.width='400px', fig.align="center", fig.cap = "Figure 16. Receptor Noise Limited Model colour space of a tetrachromatic animal. Circles denote colour locus of simulated reflectance spectra (Figure 7). Vectors represent photoreceptor outputs."}
knitr::include_graphics('./RNLmodel4.png', auto_pdf = FALSE)
```
### `radarplot(model)`
`radarplot` plots quantum catches ($Qr$) or photoreceptor outputs ($E$-values) in a radar plot.
```{r echo=TRUE, fig.cap="Figure 17. Radar plot representing `CTTKmodel( )` photoreceptor inputs of a pentachromatic animal. Each polygon denotes a reflectance spectrum in Figure 7.", fig.align="center", fig.width=4, fig.height=4, out.width='300px'}
CTTKmodel5<-CTTKmodel(R=R, I=D65, C=photor(c(350,410,470,530,590),beta.band=FALSE), Rb=Rb)
radarplot(CTTKmodel5, item="Qr", item.labels=TRUE, border=colours)
```
```{r echo=TRUE, fig.cap="Figure 18. Radar plot representing `CTTKmodel( )` photoreceptor outputs of a pentachromatic animal. Each polygon denotes a reflectance spectrum in Figure 7.", fig.align="center", fig.width=4, fig.height=4, out.width='300px'}
radarplot(CTTKmodel5, item="E", item.labels=TRUE, ann=FALSE, xaxt = "n", yaxt = "n", ylim=c(-1.2,1.2), xlim=c(-1.2,1.2), border=colours)
```
# References