A Statistical Learning Theory Framework for Supervised Pattern Discovery

by Jonathan H. Huggins et al.

This paper formalizes a latent variable inference problem we call supervised pattern discovery, the goal of which is to find sets of observations that belong to a single "pattern." We discuss two versions of the problem and prove uniform risk bounds for both. In the first version, collections of patterns can be generated in an arbitrary manner and the data consist of multiple labeled collections. In the second version, the patterns are assumed to be generated independently by identically distributed processes. These processes are allowed to take an arbitrary form, so observations within a pattern are not in general independent of each other. The bounds for the second version of the problem are stated in terms of a new complexity measure, the quasi-Rademacher complexity.
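For context on the form such results take, the classical uniform risk bound for i.i.d. data, stated via the standard Rademacher complexity, is sketched below. This is the textbook bound, not the paper's result; the quasi-Rademacher bounds described above generalize this shape to dependent, pattern-structured observations.

```latex
% Classical i.i.d. uniform risk bound (standard Rademacher complexity),
% shown only as a reference point for the bounds discussed in the abstract.
% With probability at least 1 - \delta over an i.i.d. sample of size n:
\sup_{f \in \mathcal{F}} \left( R(f) - \hat{R}_n(f) \right)
  \;\le\; 2\,\mathfrak{R}_n(\mathcal{F}) + \sqrt{\frac{\ln(1/\delta)}{2n}}
```

Here R(f) is the true risk, R̂_n(f) the empirical risk on the sample, and 𝕽_n(𝓕) the Rademacher complexity of the hypothesis class 𝓕. In the dependent-data setting of the paper, observations within a pattern need not be independent, so this i.i.d. form does not apply directly.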



Latent Variable Discovery Using Dependency Patterns

The causal discovery of Bayesian networks is an active and important res...

Uniform Risk Bounds for Learning with Dependent Data Sequences

This paper extends standard results from learning theory with independen...

Pattern Discovery and Validation Using Scientific Research Methods

Pattern discovery, the process of discovering previously unrecognized pa...

Descriptions of Objectives and Processes of Mechanical Learning

In [1], we introduced mechanical learning and proposed 2 approaches to m...

Skopus: Exact discovery of the most interesting sequential patterns under Leverage

This paper presents a framework for exact discovery of the most interest...

'1e0a': A Computational Approach to Rhythm Training

We present a computational assessment system that promotes the learning ...

Learning in the Presence of Corruption

In supervised learning one wishes to identify a pattern present in a joi...
