A one-bit, comparison-based gradient estimator.
Published in ACHA, 2022
We use tools from one-bit compressed sensing to construct a new algorithm for comparison-based optimization. (A minimal sketch of the one-bit idea appears below.)
Download here
(*= undergraduate coauthor)
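The core idea is short to illustrate: probe random directions, record only the single bit saying whether the function value increased, and average the signed directions. The sketch below is a generic illustration under assumed interfaces (the `compare` oracle, query budget, sampling radius, and step size are placeholders), not the exact estimator or guarantees from the paper.

```python
import numpy as np

def comparison_gradient_estimate(compare, x, num_queries=100, radius=1e-3, rng=None):
    """Estimate a descent direction for f at x using only a comparison oracle.

    `compare(a, b)` is assumed to return +1 if f(a) > f(b) and -1 otherwise
    (one bit of information per query). Averaging signed random directions
    yields a vector roughly aligned with grad f(x). This is a generic
    illustration of the one-bit idea, not the paper's estimator or analysis.
    """
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(num_queries):
        z = rng.standard_normal(x.size)      # random probe direction
        s = compare(x + radius * z, x)       # one bit: did f increase along z?
        g += s * z                           # signed directions accumulate
    return g / num_queries

# Usage: minimize a quadratic when only comparisons are available.
f = lambda v: np.sum(v ** 2)
compare = lambda a, b: 1.0 if f(a) > f(b) else -1.0
x = np.ones(50)
for _ in range(200):
    x -= 0.5 * comparison_gradient_estimate(compare, x, num_queries=20)
print(f(x))   # far below the starting value f(np.ones(50)) = 50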
Published in AAAI, 2022
We propose a new, much faster kind of backpropagation for implicit-depth neural networks. (An illustrative sketch of one Jacobian-free shortcut appears below.)
Download here
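Training an implicit-depth model requires differentiating through a fixed-point equation z = layer(z, x). One inexpensive shortcut is to run the fixed-point solve without tracking gradients and then backpropagate through a single application of the layer. The PyTorch sketch below illustrates that shortcut; the layer, solver loop, and tolerances are illustrative assumptions rather than the exact scheme analyzed in the paper.

```python
import torch

def implicit_forward(layer, x, max_iter=50, tol=1e-4):
    """Forward pass of an implicit-depth model: solve z = layer(z, x), then
    apply `layer` once more with gradient tracking enabled.

    Only that final application enters the autograd graph, so the backward
    pass costs one layer evaluation instead of unrolling the whole solve.
    Illustrative sketch: `layer`, max_iter, and tol are placeholder choices.
    """
    z = torch.zeros_like(x)
    with torch.no_grad():                  # fixed-point iteration, no graph built
        for _ in range(max_iter):
            z_next = layer(z, x)
            if torch.norm(z_next - z) < tol:
                z = z_next
                break
            z = z_next
    return layer(z, x)                     # single tracked application

# Usage with a small contractive layer (illustrative only).
lin = torch.nn.Linear(8, 8)
layer = lambda z, x: 0.5 * torch.tanh(lin(z)) + x
x = torch.randn(4, 8, requires_grad=True)
loss = implicit_forward(layer, x).pow(2).sum()
loss.backward()                            # gradients reach lin.weight and x
```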
Submitted, 2021
We show how to convert the problem of minimizing a convex function over the standard probability simplex to that of minimizing a nonconvex function over the unit sphere. (One standard reparametrization of this kind is recorded below.)
Download here
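For reference, one standard way to carry out such a conversion is the entrywise squaring map, which sends the unit sphere onto the probability simplex; whether this is exactly the construction used in the paper is not claimed here.

```latex
% If \|y\|_2 = 1 then x = y \circ y (entrywise square) satisfies
% x_i = y_i^2 \ge 0 and \sum_i x_i = \|y\|_2^2 = 1, i.e. x lies in the simplex.
\min_{x \in \Delta^{n-1}} f(x),
\qquad \Delta^{n-1} = \Big\{ x \in \mathbb{R}^n : x_i \ge 0,\ \sum_{i=1}^n x_i = 1 \Big\},
\qquad \longleftrightarrow \qquad
\min_{\|y\|_2 = 1} f(y \circ y).
```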
Published in SIMODS (to appear), 2021
We further explore the use of shortest path metrics on high dimensional data.
Download here
Submitted to SIURO, 2021
We provide a criterion for converting ZO algorithms to the comparison-based setting.
Download here
Under review, 2021
We propose a line search algorithm for zeroth-order optimization with low query complexity, both in theory and in practice. (A generic derivative-free line search is sketched below for context.)
Download here
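The sketch below shows what a derivative-free line search can look like in its simplest form: expand the step while the function value keeps dropping, and backtrack otherwise. It is only a generic illustration with assumed parameter names and defaults; the algorithm and query-complexity guarantees in the paper differ in the details.

```python
import numpy as np

def zo_line_search(f, x, d, t0=1.0, shrink=0.5, grow=2.0, max_evals=20):
    """Choose a step size along direction d using only function values.

    Expand the step while the function keeps decreasing, otherwise backtrack.
    Generic illustration only; the algorithm and query-complexity analysis
    in the paper differ in the details.
    """
    f0 = f(x)
    t, best_t, best_f, evals = t0, 0.0, f0, 0
    while evals < max_evals:
        ft = f(x + t * d)
        evals += 1
        if ft < best_f:          # progress: record it and try a larger step
            best_t, best_f = t, ft
            t *= grow
        elif best_t > 0.0:       # a good step was already found; stop growing
            break
        else:                    # no decrease yet: backtrack
            t *= shrink
    return best_t

# Usage: one descent step along a (roughly) downhill random direction.
f = lambda v: np.sum((v - 1.0) ** 2)
x = np.zeros(10)
d = np.random.randn(10)
d = -np.sign(f(x + 1e-6 * d) - f(x)) * d   # orient d downhill (two extra queries)
x_new = x + zo_line_search(f, x, d) * d
print(f(x), f(x_new))                      # the second value should be smaller
```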
Under review, 2021
We apply the FPN technology developed in an earlier work to the problem of predicting Nash equilibria in parametrized games.
Download here
Published in ICML, 2021
We propose a new algorithm for ultra-high dimensional black-box optimization (over 1 million variables).
Download here
Published in SIOPT (to appear), 2021
We propose a new zeroth-order optimization algorithm that uses compressive sensing to approximate gradients. (A sketch of this style of gradient estimation appears below.)
Download here
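The sketch below illustrates the kind of estimator this enables when the gradient is approximately sparse: a handful of finite differences along random sign vectors act as compressive measurements of the gradient, and a simple sparse-recovery loop reconstructs it. The iterative-hard-thresholding recovery, query budget, and step sizes here are illustrative stand-ins for the choices made in the paper.

```python
import numpy as np

def cs_gradient_estimate(f, x, sparsity=10, num_queries=100, delta=1e-4,
                         iht_steps=100, rng=None):
    """Estimate an approximately sparse gradient from few function queries.

    Finite differences along random sign vectors act as compressive
    measurements y ≈ Z @ grad f(x); iterative hard thresholding then recovers
    a sparse estimate. Illustrative stand-in only: the recovery routine,
    query budget, and step sizes are not the choices made in the paper.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    Z = rng.choice([-1.0, 1.0], size=(num_queries, d)) / np.sqrt(num_queries)
    fx = f(x)
    y = np.array([(f(x + delta * z) - fx) / delta for z in Z])   # y_i ≈ z_i · grad f(x)
    g = np.zeros(d)
    for _ in range(iht_steps):
        g = g + Z.T @ (y - Z @ g)                   # gradient step on the residual
        keep = np.argsort(np.abs(g))[-sparsity:]    # indices of the s largest entries
        pruned = np.zeros(d)
        pruned[keep] = g[keep]
        g = pruned                                  # hard-threshold back to sparsity s
    return g

# Usage: a 400-dimensional function whose gradient has only 10 nonzero entries.
f = lambda v: np.sum(v[:10] ** 2)
x = np.random.randn(400)
g_hat = cs_gradient_estimate(f, x)
g_true = np.concatenate([2.0 * x[:10], np.zeros(390)])
print(np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))  # small when recovery succeeds
```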
Published in GTA3 workshop at IEEE Big Data, 2020
We construct and analyze a knowledge graph for season one of the TV show Veronica Mars.
Download here
Published in SIMODS, 2020
We apply tools from compressive sensing to the problem of finding clusters in graphs.
Download here
Published in Foundations of Data Science, 2019
We study the use of power weighted shortest path distance functions for clustering high dimensional Euclidean data, under the assumption that the data is drawn from a collection of disjoint low dimensional manifolds. We argue, theoretically and experimentally, that this leads to higher clustering accuracy. We also present a fast algorithm for computing these distances. (A brute-force sketch of the construction appears below.)
Recommended citation: McKenzie, Daniel and Damelin, Steven. (2019). "Power weighted shortest paths for clustering Euclidean data." Foundations of Data Science, 1(3). https://arxiv.org/abs/1905.13345
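The basic construction is easy to prototype even without the fast algorithm from the paper: weight each edge of a k-nearest-neighbor graph by the Euclidean distance raised to a power p, then run a standard shortest-path solver. The sketch below does exactly that with placeholder choices of p and k; it is a brute-force baseline, not the paper's algorithm.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def power_weighted_path_distances(X, p=2.0, k=10):
    """All-pairs power weighted shortest path distances on a kNN graph.

    Each edge of the k-nearest-neighbor graph is weighted by the Euclidean
    distance raised to the power p, so paths hopping through dense regions
    are cheap. Brute-force baseline with placeholder p and k; the fast
    algorithm from the paper is not reproduced here.
    """
    n = X.shape[0]
    D = cdist(X, X)                              # pairwise Euclidean distances
    rows, cols, vals = [], [], []
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]         # k nearest neighbors (skip self)
        rows.extend([i] * k)
        cols.extend(nbrs.tolist())
        vals.extend((D[i, nbrs] ** p).tolist())  # power weighted edge costs
    G = csr_matrix((vals, (rows, cols)), shape=(n, n))
    return shortest_path(G, method="D", directed=False)

# Usage: two well-separated Gaussian blobs in five dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (50, 5)), rng.normal(4.0, 0.5, (50, 5))])
P = power_weighted_path_distances(X, p=2.0, k=10)
# Within-blob path distances stay small; across-blob ones are large
# (or infinite, if the kNN graph happens to be disconnected).
print(P[:50, :50].mean(), P[:50, 50:].mean())
```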