These bug fixes and features are scheduled for the upcoming
releases.
BUG: Fix the DCV code with convolutions (especially the quartic
one)
BUG: LOO estimation: instead of dropping unique (X, Y) observations,
leave out each conditioning point (only X)
BUG: Fix the optimiser control argument in bw.CV(); add a
log() transformation for non-negativity and better scaling.
SYNTAX: kernelSmooth(), being a local average, should
have an na.rm argument and validate its inputs
SYNTAX: In kernelDiscreteDensitySmooth(), remove the
table attribute and change the test.
SYNTAX: Create a summary class for SEL; print numerical gradients of
lambdas; print the number of converged inner optimisation problems
FEATURE: Speed up interpolation by memoising it
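A minimal sketch of the memoisation idea, assuming a wrapper that caches results by a string key built from the arguments (the names memoise1 and slowInterp are illustrative, not the package API):

```r
# Hypothetical memoising wrapper: cache the results of an expensive
# interpolation, keyed by a deparsed version of the arguments.
memoise1 <- function(f) {
  cache <- new.env(parent = emptyenv())
  function(...) {
    key <- paste(deparse(list(...)), collapse = "")
    if (!exists(key, envir = cache))
      assign(key, f(...), envir = cache)
    get(key, envir = cache)
  }
}

# Stand-in for the expensive interpolation step
slowInterp <- function(x) approx(c(0, 1), c(0, 2), xout = x)$y
fastInterp <- memoise1(slowInterp)
fastInterp(0.5)  # computed once
fastInterp(0.5)  # served from the cache
```

A digest-based key (e.g. via the memoise package) would be more robust for large inputs; this sketch only shows the caching mechanism.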
FEATURE: Check if only 4 points, as opposed to 6, are required for
extrapolation in weightedEL0
FEATURE: Create a class for smoothing that would yield LOESS
smoothing matrices, with ranks or distances
FEATURE: For sparseVectorToList(), the default
trim function should be such that the cumulative sum of the sorted
normalised weights exceeds 0.99999999:
trim = \(w) min(which(cumsum(sort(w / sum(w), decreasing = TRUE)) > 1 - 1e-8))
FEATURE: Create convolution for kernel orders 4 and 6
FEATURE: Add a convergence check in brentZero(), as in
uniroot().
FEATURE: De-duplicate already at the kernel-weight stage (via
.prepareKernel()) and return the attribute
FEATURE: For .prepareKernel() AND the mixed kernel: check
whether the maximum column-wise gap between observations is greater
than or equal to the bandwidth and, if so, print an informative message
FEATURE: Make DCV either sparse or memsave, not both; reflect the
changes in bw.CV()
FEATURE: Remove parallelisation over workers via setThreadOptions
when there is outer parallelisation in .kernelMixed()
FEATURE: Move the de-duplication of the xout grid inside
kernelSmooth
FEATURE: Create a default value for memsave and a rule for
when to invoke it (based on nx*ng)
FEATURE: Add weight support to
kernelDiscreteDensitySmooth()
FEATURE: Eliminate matrices in smoothing completely, try only
parallel loops
FEATURE: CV: implement leave-K-out CV for speed
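The leave-K-out idea could start from a fold assignment like the following sketch (makeFolds is a hypothetical helper, not the package API): each fold of K points is held out in turn, and the CV criterion is averaged over folds instead of over n single-point deletions.

```r
# Sketch: randomly assign n observations to ceiling(n / K) folds of
# (at most) K points each, so the criterion is evaluated once per fold.
makeFolds <- function(n, K) {
  split(sample.int(n), rep(seq_len(ceiling(n / K)), each = K, length.out = n))
}
folds <- makeFolds(n = 10, K = 3)
# 4 folds; every index 1..10 appears in exactly one fold
```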
FEATURE: In kernelMixedSmooth(): if LOO, do not
de-duplicate xout, copy it from arg$x
(currently mitigated via deduplicate.xout = FALSE)
FEATURE: Add LOO to the C++ density function
FEATURE: Add custom kernels to Silverman’s rule of thumb (with
roughness != 1)
FEATURE: Check: if the kernel has finite support and the bandwidth is
smaller than the largest gap between two observations, then set the
bandwidth in that dimension to 1.1 times that gap.
kernelSmooth() and kernelDensity() should have
an argument for increasing small bandwidths in case of zero weights to
match the largest gap divided by 2 (times 1.1 to have at least some
coverage)
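A one-dimensional sketch of the second variant of this rule (fixBandwidth is an illustrative name, and the gap/2 threshold is the one described above, not the package's actual check):

```r
# Sketch: if a finite-support kernel's bandwidth is below half the
# largest gap between sorted unique observations, inflate it to
# 1.1 * (gap / 2) so that every point has some coverage.
fixBandwidth <- function(x, bw) {
  g <- max(diff(sort(unique(x))))  # largest gap in this dimension
  if (bw < g / 2) bw <- 1.1 * g / 2
  bw
}
fixBandwidth(c(0, 1, 5, 6), bw = 0.5)  # gap = 4, returns 2.2
```

For a matrix of covariates the same check would be applied column by column.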
FEATURE: Like in the SEL application: de-duplicate the input matrix,
replace with weights; allow the user to disable it
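A sketch of the de-duplication step (dedupWithWeights is an illustrative helper, not the package internals): duplicated rows are collapsed into unique rows plus multiplicity counts that can then serve as weights.

```r
# Sketch: collapse duplicated rows of the input matrix into unique
# rows plus counts usable as kernel weights.
dedupWithWeights <- function(X) {
  key <- apply(X, 1, paste, collapse = "\r")
  tab <- table(key)
  keep <- !duplicated(key)
  list(x = X[keep, , drop = FALSE],
       weights = as.numeric(tab[key[keep]]))
}
X <- rbind(c(1, 2), c(1, 2), c(3, 4))
dedupWithWeights(X)  # 2 unique rows with weights 2 and 1
```

Disabling this behaviour would simply mean returning X with unit weights.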
MISC: Add references to AEL and BAEL (Chen 2008, Emerson & Owen
2009, … 2011)
MISC: Check analytical expressions for all combinations of kernels,
convolutions, and orders in Sage and Mathematica, publish the
reproducing codes
MISC: Reproduce the CKT (2019) results with the shift
argument (i.e. test the shift)
MISC: Add a vignette for non-parametric methods to GitHub, finish
the mixed-smoothing part
DEV: Check all instances of kernelSmooth(),
kernelDensity(), kernelWeights(), and
everything else that uses obsolete arguments in the examples
DEV: Too much CPU time in the examples for
kernelDensity and kernelSmooth (>5 s); wrap
them in \dontrun{}?
DEV: Add
RcppParallel::setThreadOptions(numThreads = "auto") as the
first line of parallel-capable functions; also use setDTthreads
(check how data.table does it)
DEV: Write test cases for C++ functions with and without
speed-ups