Many types of information (e.g., language, music, and genes) can be represented as sequential data that often contains related information separated by many time steps. These long-term dependencies are difficult to model, because retaining information from the whole sequence requires greater model complexity (Trinh et al., 2018; Liu et al., 2019; Shewalkar, 2019; Yu et al., 2019; Zhao et al., 2020). Data classification is one of the most important tasks in applications such as text categorization, tone recognition, image classification, microarray gene expression, and protein structure prediction (Choi et al., 2017; Johnson and Zhang, 2017; Malhotra et al., 2017; Aggarwal et al., 2018; Fang et al., 2018; Mikołajczyk and Grochowski, 2018; Kerkeni et al., 2019; Saritas and Yasar, 2019; Yildirim et al., 2019; Chandrasekar et al., 2020).

The SS-RNN enhances long-term memory ability and, along the time direction, improves the correlation between states at different moments. To include the historical information, we design two different processing methods for the SS-RNN, working in continuous and discontinuous ways, respectively. For each method, there are two ways of adding the historical information: 1) direct addition and 2) weighted addition with a function mapping through the activation function. This provides six pathways with which to fully and deeply explore the effect and influence of historical information on RNNs. By comparing average accuracy on real datasets against long short-term memory (LSTM), Bi-LSTM, gated recurrent units (GRU), and MCNN, and by calculating the main indexes (Accuracy, Precision, Recall, and F1-score), we observe that our method improves average accuracy, optimizes the structure of the recurrent neural network, and effectively mitigates the exploding- and vanishing-gradient problems.
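The historical-information pathways described above can be illustrated as a recurrent cell with an extra skip connection to an earlier hidden state. The sketch below is a minimal assumption-laden illustration, not the paper's specification: the `skip` distance, the plain tanh cell, and the exact update rule for the "weighted" variant are all placeholders chosen for clarity.

```python
import numpy as np

def ss_rnn_forward(x_seq, Wx, Wh, Ws, b, skip=3, mode="direct"):
    """Sketch of an SS-RNN-style cell with one skip connection.

    Besides the standard recurrence h_t = tanh(Wx x_t + Wh h_{t-1} + b),
    the hidden state from `skip` steps back is injected, either by
    1) direct addition ("direct") or 2) a weighted addition with a
    function mapping ("weighted"). Illustrative assumption only.
    """
    hidden = Wh.shape[0]
    hs = []
    h_prev = np.zeros(hidden)
    for t, x_t in enumerate(x_seq):
        pre = Wx @ x_t + Wh @ h_prev + b
        if t >= skip:
            h_skip = hs[t - skip]
            if mode == "direct":
                pre = pre + h_skip                 # 1) direct addition
            else:
                pre = pre + Ws @ np.tanh(h_skip)   # 2) weighted + mapped
        h_prev = np.tanh(pre)
        hs.append(h_prev)
    return np.stack(hs)

rng = np.random.default_rng(0)
d_in, d_h, T = 4, 8, 10
x = rng.standard_normal((T, d_in))
Wx = rng.standard_normal((d_h, d_in)) * 0.1
Wh = rng.standard_normal((d_h, d_h)) * 0.1
Ws = rng.standard_normal((d_h, d_h)) * 0.1
b = np.zeros(d_h)
H = ss_rnn_forward(x, Wx, Wh, Ws, b, skip=3, mode="weighted")
print(H.shape)  # one hidden state per time step
```

Because the skip path feeds the state from several steps back directly into the current pre-activation, the gradient has a shorter route to earlier time steps, which is the intuition behind the improved long-term memory claimed above.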
Recurrent neural networks are widely used in time series prediction and classification. However, they have problems such as insufficient memory ability and difficulty in gradient back-propagation. To solve these problems, this paper proposes a new algorithm called SS-RNN, which directly uses multiple pieces of historical information to predict the information at the current time step.

TimeNet: Pre-trained deep recurrent neural network for time series classification, by Pankaj Malhotra and 4 other authors. Abstract: Inspired by the tremendous success of deep Convolutional Neural Networks as generic feature extractors for images, we propose TimeNet: a deep recurrent neural network (RNN) trained on diverse time series in an unsupervised manner using sequence-to-sequence (seq2seq) models to extract features from time series. Rather than relying on data from the problem domain, TimeNet attempts to generalize time series representation across domains by ingesting time series from several domains simultaneously. Once trained, TimeNet can be used as a generic off-the-shelf feature extractor for time series. Representations or embeddings given by a pre-trained TimeNet are found to be useful for time series classification (TSC). For several publicly available datasets from the UCR TSC Archive and an industrial telematics sensor dataset from vehicles, we observe that a classifier learned over the TimeNet embeddings yields significantly better performance compared to (i) a classifier learned over the embeddings given by a domain-specific RNN, as well as (ii) a nearest-neighbor classifier based on Dynamic Time Warping.
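The off-the-shelf feature-extractor pattern that TimeNet describes — encode each series to a fixed-length embedding, then train a simple classifier on the embeddings — can be sketched as follows. This is a toy stand-in, not TimeNet itself: the encoder weights are random rather than pretrained with a seq2seq objective, the dataset is synthetic, and a 1-nearest-neighbour rule stands in for the learned classifier.

```python
import numpy as np

def rnn_encode(series, Wx, Wh, b):
    """Toy RNN encoder: the final hidden state is a fixed-length
    embedding of a variable-length univariate series (a stand-in for
    a pretrained TimeNet encoder; weights here are random)."""
    h = np.zeros(Wh.shape[0])
    for x_t in series:
        h = np.tanh(Wx * x_t + Wh @ h + b)
    return h

rng = np.random.default_rng(1)
d_h = 16
Wx = rng.standard_normal(d_h) * 0.1
Wh = rng.standard_normal((d_h, d_h)) * 0.1
b = np.zeros(d_h)

# Synthetic dataset with variable lengths: class 0 = sine-like, class 1 = noise.
train = [(np.sin(np.linspace(0, 6, n)), 0) for n in (30, 40, 50)] + \
        [(rng.standard_normal(n), 1) for n in (35, 45, 55)]

# Extract embeddings once, then classify in embedding space (1-NN here).
emb = np.stack([rnn_encode(s, Wx, Wh, b) for s, _ in train])
labels = np.array([y for _, y in train])

def classify(series):
    e = rnn_encode(series, Wx, Wh, b)
    return labels[np.argmin(np.linalg.norm(emb - e, axis=1))]

pred = classify(np.sin(np.linspace(0, 6, 44)))
print(pred)
```

The design point the abstract makes is that the encoder is trained once across many domains and then frozen, so downstream tasks only pay for the cheap classifier on top of the embeddings.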