Convolutional neural networks (CNNs) have been successfully used in many applications where important information about data is embedded in the order of features, such as speech and imaging. However, most tabular data do not assume a spatial relationship between features, and thus are unsuitable for modeling using CNNs. To meet this challenge, we develop a novel algorithm, image generator for tabular data (IGTD), to transform tabular data into images by assigning features to pixel positions so that similar features are close to each other in the image. The algorithm searches for an optimized assignment by minimizing the difference between the ranking of distances between features and the ranking of distances between their assigned pixels in the image. We apply IGTD to transform gene expression profiles of cancer cell lines (CCLs) and molecular descriptors of drugs into their respective image representations. Compared with existing transformation methods, IGTD generates compact image representations with better preservation of feature neighborhood structure. Evaluated on benchmark drug screening datasets, CNNs trained on IGTD image representations of CCLs and drugs exhibit better performance in predicting anti-cancer drug response than both CNNs trained on alternative image representations and prediction models trained on the original tabular data.

Convolutional neural networks (CNNs) have been successfully used in numerous applications, such as image and video recognition 1, 2, 3, 4, medical image analysis 5, 6, natural language processing 7, and speech recognition 8. CNNs are inspired by visual neuroscience and possess key features that exploit the properties of natural signals, including local connections in the receptive field, parameter sharing via the convolution kernel, and hierarchical feature abstraction through pooling and multiple layers 9. These features make CNNs suitable for analyzing data with spatial or temporal dependencies between components 10, 11. A particular example is imaging, in which the spatial arrangement of pixels carries crucial information about the image content. When applied to images for object recognition, the bottom layers of CNNs detect low-level local features, such as oriented edges at certain positions. As the information flows through the layers, low-level features combine to form more abstract high-level features, assembling motifs and then parts of objects, until whole objects are identified.

Although CNNs have been applied to image analysis with great success, non-image data are prevalent in many fields, such as bioinformatics 12, 13, 14, medicine 15, 16, finance, and others, for which CNNs might not be directly applicable to take full advantage of their modeling capacity. For some tabular data, the order of features can be rearranged in a 2-D space to explicitly represent relationships between features, such as feature categories or similarities 17, 18, 19. This motivates the transformation of tabular data into images, from which CNNs can learn and utilize the feature relationships to improve prediction performance compared with models trained on tabular data. The transformation converts each sample in the tabular data into an image in which features and their values are represented by pixels and pixel intensities, respectively. A feature is represented by the same pixel (or pixels) in the images of all samples, with the pixel intensities varying across images. To our knowledge, three methods have been developed to transform non-image tabular data into images for predictive modeling using CNNs. One of them, DeepInsight 17, projects feature vectors onto a 2-D space using t-SNE 20, which minimizes the Kullback–Leibler divergence between the feature distributions in the 2-D projection space and the original full-dimensional space. Then, on the 2-D projection, the algorithm identifies a minimum-area rectangle that includes all the projected feature points, which forms the image representation.
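To make the IGTD objective concrete, the following is a minimal, simplified sketch of the idea described above: score a feature-to-pixel assignment by how much the ranking of pairwise feature distances disagrees with the ranking of pairwise pixel distances, then improve it with random swaps. The function names (`pairwise_dist`, `igtd_error`, `optimize_assignment`), the use of absolute rank differences, and the hill-climbing swap loop are illustrative assumptions, not the paper's exact optimization procedure.

```python
import numpy as np

def pairwise_dist(x):
    """Condensed upper-triangle Euclidean distances between rows of x."""
    diff = x[:, None, :] - x[None, :, :]
    full = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(x.shape[0], k=1)
    return full[iu]

def igtd_error(data, pixel_coords):
    """Mismatch between the ranking of feature distances (columns of the
    tabular data) and the ranking of distances between their assigned pixels."""
    feat_rank = pairwise_dist(data.T).argsort().argsort()
    pix_rank = pairwise_dist(pixel_coords).argsort().argsort()
    return np.abs(feat_rank - pix_rank).sum()

def optimize_assignment(data, pixel_coords, n_iter=1000, seed=0):
    """Greedy hill climbing: swap two features' pixels, keep the swap if
    the rank-mismatch error decreases. Returns (permutation, error)."""
    rng = np.random.default_rng(seed)
    order = np.arange(pixel_coords.shape[0])
    best = igtd_error(data, pixel_coords[order])
    for _ in range(n_iter):
        i, j = rng.choice(len(order), size=2, replace=False)
        order[i], order[j] = order[j], order[i]
        err = igtd_error(data, pixel_coords[order])
        if err < best:
            best = err
        else:
            order[i], order[j] = order[j], order[i]  # revert the swap
    return order, best
```

After optimization, each sample's feature vector can be written into the pixel grid in the resulting order, with feature values as pixel intensities, yielding one image per sample as described above.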