help-guix

Re: how to use r-keras?


From: Simon Tournier
Subject: Re: how to use r-keras?
Date: Wed, 08 May 2024 14:34:48 +0200

Hi Ricardo,

On Wed, 08 May 2024 at 12:11, Ricardo Wurmus <rekado@elephly.net> wrote:

> This appears to be a common problem, but we don't know why.  It's
> probably related to the bazel-build-system.  You'll get substitutes only
> if you use `--no-grafts'.

Ah, weird… I don't see how the build system could affect the content
address.  But that's another story. :-)


>> Then I get this:
>>
>> --8<---------------cut here---------------start------------->8---
>> $ guix time-machine -C channels.scm -- shell -C r r-keras -C
>> python-minimal r-reticulate tensorflow@2.13.1
>
> You need python-tensorflow (also from guix-science), not just the
> tensorflow library.

Cool!  It just works!

Thank you.

Cheers,
simon


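For readers without the channels.scm used above: a minimal sketch might look
like the following.  This is only a sketch — it adds the guix-science channel
(where python-tensorflow lives) on top of the default channels, and it omits
the channel introduction, which you should copy verbatim from the
guix-science README before using this for real:

```scheme
;; channels.scm -- sketch only.  Adds the guix-science channel; the
;; authentication introduction (make-channel-introduction ...) is
;; deliberately omitted here and must be taken from the channel's README.
(cons (channel
       (name 'guix-science)
       (url "https://github.com/guix-science/guix-science.git"))
      %default-channels)
```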
--8<---------------cut here---------------start------------->8---
$ guix time-machine -C channels.scm \
       -- shell -C r r-keras -C python-minimal r-reticulate tensorflow@2.13.1 \
       python-tensorflow

[env]$ R

R version 4.3.3 (2024-02-29) -- "Angel Food Cake"
Copyright (C) 2024 The R Foundation for Statistical Computing
Platform: x86_64-unknown-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> library(keras)
> model <- keras_model_sequential()
Would you like to create a default python environment for the reticulate package? (Yes/no/cancel) no
2024-05-08 12:30:26.110266: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-05-08 12:30:26.137568: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
> model %>%

  # Adds a densely-connected layer with 64 units to the model:
  layer_dense(units = 64, activation = 'relu') %>%

  # Add another:
  layer_dense(units = 64, activation = 'relu') %>%

  # Add a softmax layer with 10 output units:
  layer_dense(units = 10, activation = 'softmax')
+ + + + + + + + + > 
> model %>% compile(
  optimizer = 'adam',
  loss = 'categorical_crossentropy',
  metrics = list('accuracy')
)
+ + + + > 
> data <- matrix(rnorm(1000 * 32), nrow = 1000, ncol = 32)
labels <- matrix(rnorm(1000 * 10), nrow = 1000, ncol = 10)

model %>% fit(
  data,
  labels,
  epochs = 10,
  batch_size = 32
)
> > > + + + + + Epoch 1/10
32/32 [==============================] - 0s 837us/step - loss: -0.2669 - accuracy: 0.1060
32/32 [==============================] - 0s 951us/step - loss: -0.2669 - accuracy: 0.1060
Epoch 2/10
32/32 [==============================] - 0s 743us/step - loss: -0.4499 - accuracy: 0.1120
32/32 [==============================] - 0s 1ms/step - loss: -0.4499 - accuracy: 0.1120
Epoch 3/10
32/32 [==============================] - 0s 682us/step - loss: -0.6262 - accuracy: 0.1050
32/32 [==============================] - 0s 736us/step - loss: -0.6262 - accuracy: 0.1050
Epoch 4/10
32/32 [==============================] - 0s 643us/step - loss: -0.8110 - accuracy: 0.1150
32/32 [==============================] - 0s 695us/step - loss: -0.8110 - accuracy: 0.1150
Epoch 5/10
32/32 [==============================] - 0s 614us/step - loss: -1.0271 - accuracy: 0.1270
32/32 [==============================] - 0s 659us/step - loss: -1.0271 - accuracy: 0.1270
Epoch 6/10
32/32 [==============================] - 0s 535us/step - loss: -1.1987 - accuracy: 0.1480
32/32 [==============================] - 0s 599us/step - loss: -1.1987 - accuracy: 0.1480
Epoch 7/10
32/32 [==============================] - 0s 637us/step - loss: -1.4685 - accuracy: 0.1230
32/32 [==============================] - 0s 739us/step - loss: -1.4685 - accuracy: 0.1230
Epoch 8/10
32/32 [==============================] - 0s 608us/step - loss: -1.7238 - accuracy: 0.1380
32/32 [==============================] - 0s 654us/step - loss: -1.7238 - accuracy: 0.1380
Epoch 9/10
32/32 [==============================] - 0s 552us/step - loss: -1.9540 - accuracy: 0.1270
32/32 [==============================] - 0s 592us/step - loss: -1.9540 - accuracy: 0.1270
Epoch 10/10
32/32 [==============================] - 0s 705us/step - loss: -2.2367 - accuracy: 0.1160
32/32 [==============================] - 0s 773us/step - loss: -2.2367 - accuracy: 0.1160
> 
--8<---------------cut here---------------end--------------->8---
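A side note on the reticulate prompt in the transcript ("Would you like to
create a default python environment…"): you can point reticulate directly at
the Python provided by the environment so it never offers to build its own.
A sketch, assuming you are inside `guix shell` (which sets GUIX_ENVIRONMENT)
and that python-minimal provides bin/python3 there:

```shell
# Inside the `guix shell` container, GUIX_ENVIRONMENT points at the
# environment's profile.  reticulate honours RETICULATE_PYTHON, so
# exporting it here makes reticulate use the Guix-provided interpreter.
export RETICULATE_PYTHON="$GUIX_ENVIRONMENT/bin/python3"
```

Set this before starting R; `reticulate::py_config()` should then report the
Guix interpreter.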


