BeagleBone: BackPropagation Neural Network Prototype

The Universal Approximator, the core algorithm of the BackPropagation Neural Network, has been tested successfully on the BeagleBone.

Code is at:

http://svn.nn-os.org/public

The code builds a 10x64x10 3-layer Neural Network to handle a sample 10-input, 10-output mapping.

Below, a smooth multivariate function from R^10 to R^10 is sampled:

for (i = 0; i < Units[0]; i++) {
    input1[i] = (REAL)i;
    target1[i] = input1[i]*input1[i]/(REAL)(Units[0]*Units[0]);
    printf("%d %f\n", i, target1[i]);
}

printf("\n\n");

for (i = 0; i < Units[0]; i++) {
    input2[i] = (REAL)(i + 1);
    target2[i] = input2[i]/(REAL)(Units[0]);
    printf("%d %f\n", i, target2[i]);
}
input1 and input2 are distinct inputs to the NN; target1 and target2 are the desired outputs. All are arrays of 10 real numbers (double). Mathematically, f(input1) = target1 and f(input2) = target2.
Look at the output below, for example at index 5:
target1[5] =  0.250000
target2[5] =  0.600000
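These values follow directly from the sampling loops above; as a quick standalone check (plain C, independent of nn-os; the helper names are made up for illustration):

```c
#include <assert.h>
#include <math.h>

/* hypothetical helpers restating the two sampling formulas above */
double target1_at(int i, int n) { return (double)(i * i) / (double)(n * n); }
double target2_at(int i, int n) { return (double)(i + 1) / (double)n; }
```

With Units[0] = 10, target1_at(5, 10) gives 25/100 = 0.25 and target2_at(5, 10) gives 6/10 = 0.6, matching the printed values.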
The Universal Approximator runs 1800 training passes over this repeated set:
for (i = 0; i < 1800; i++) {
    nnSetInput(&netQaudCopter, input1);
    nnSimulateNet(&netQaudCopter,input1, output1, target1, TRUE);

    nnSetInput(&netQaudCopter, input2);
    nnSimulateNet(&netQaudCopter,input2, output2, target2, TRUE);
}
output1[5] =  0.248753
output2[5] =  0.601834

See how close the outputs are to the desired targets!
Note that no information about the function f was coded into the NN control main loop!
Posted in Beaglebone, C/C++, embedded, Software, test

nnCONDUCTOR

To get a good assessment of the abilities of nn-os, a simple application was developed: a Neural Network conductor that learns from the gestures of one of the user's hands. Note that all the code below runs on the BeagleBone board, and fast:

1. Movements: movement to the right (+0.5), movement to the left (-0.5), standing still (0.0)
2. 30 notes of Beethoven's Ode To Joy were chosen for the music, represented in the frequency domain. This could have been full MIDI, but for now a simple set of frequencies fed to a sinusoidal sound generator suffices:


/* define the notes by the frequency of their sinusoid */
	c4 = 261.64;
	d4 = 293.68;
	e4 = 329.64;
	f4 = 349.24;
	g4 = 392.00;
	a4 = 440.00;
	b4 = 493.92;
	

/* TODO: this should come from the MIDI file */
	OdeToJoy[0] = e4;
	OdeToJoy[1] = e4;
	OdeToJoy[2] = f4;
	OdeToJoy[3] = g4;
	OdeToJoy[4] = g4;
	OdeToJoy[5] = f4;
	OdeToJoy[6] = e4;
	OdeToJoy[7] = d4;
	OdeToJoy[8] = c4;
	OdeToJoy[9] = c4;
	OdeToJoy[10] = d4;
	OdeToJoy[11] = e4;
	OdeToJoy[12] = e4;
	OdeToJoy[13] = d4;
	OdeToJoy[14] = d4;
	OdeToJoy[15] = e4;
	OdeToJoy[16] = e4;
	OdeToJoy[17] = f4;
	OdeToJoy[18] = g4;
	OdeToJoy[19] = g4;
	OdeToJoy[20] = f4;
	OdeToJoy[21] = e4;
	OdeToJoy[22] = d4;
	OdeToJoy[23] = c4;
	OdeToJoy[24] = d4;
	OdeToJoy[25] = e4;
	OdeToJoy[26] = e4;
	OdeToJoy[27] = d4;
	OdeToJoy[28] = c4;
	OdeToJoy[29] = c4;

3. Based upon the changes in the notes, a perfect conductor is calculated:

REAL nnSign (REAL a)
{
	
	if (a > 0.0)	return 0.5;
	if (a < 0.0)	return -0.5;
	
	return 0.0;
	
}




for (i = 0; i < Units[0] - 1; i++)
{
	/* best theoretical conducting possible */
	input1[i] = nnSign (OdeToJoy[i+1] - OdeToJoy[i]);
	target1[i] = OdeToJoy[i]/b4;
}

printf("Perfect theoretical conducting \n");
for (i = 0; i < Units[0] - 1; i++)
	printf (" %f", input1[i]);

printf("\n\n");
0.000000 0.500000 0.500000 0.000000 -0.500000 -0.500000 -0.500000 -0.500000 0.000000 0.500000 0.500000 0.000000 -0.500000 0.000000 0.500000 0.000000 0.500000 0.500000 0.000000 -0.500000 -0.500000 -0.500000 -0.500000 0.500000 0.500000 0.000000 -0.500000 -0.500000 0.000000

4. A NN is created to handle the backpropagated adaptive learning:


nnInit (&net);

NUM_LAYERS = 3;

/* dynamic array allocation */
Units = (INT *) malloc (NUM_LAYERS * sizeof(INT));

/* 30 inputs, 64 hidden middle-layer units, 30 outputs, 3 layers total */
/* 5 for the fingers of the hand: if I play them one after the other, what
   will the NN play in return, adjusting to OdeToJoy via its adaptive learning? */
Units[0] = 30;
Units[1] = 64;
Units[2] = 30;

nnGenerateNetwork(&net, NUM_LAYERS, Units);
nnRandomWeights(&net, -1.0, 1.0);
	

Let's see how accurately this NN learns the perfect theoretical hand motion for the Ode To Joy notes. The learning was repeated 300 times, which is overkill, but we wanted to see how fast the BeagleBone vectorization works:

for (i = 0; i < 300; i++)
{
	nnSetInput(&net, input1);
	nnSimulateNet(&net, input1, output1, target1, TRUE);
}

printf("Let's check the errors, the first row is the NN evaluation after training, the second row is what it was supposed to be: \n");

printf("\n");
for (i = 0; i < Units[2]; i++)
{
	printf("%f ", b4*output1[i]);
}
printf("\n");
printf ("Best theoretical training \n");
for (i = 0; i < Units[2]; i++)
{
	printf("%f ", b4*target1[i]);
}

The first row of numbers is the NN evaluation, which takes the perfect theoretical hand motions as input and generates the notes of Ode To Joy. See how close the frequencies are:


I am nnCONDUCTOR
Let's check the errors, the first row is the NN evaluation after training, the second row is what it was supposed to be:


329.632723 329.653514 349.236756 392.004078 392.032540 349.250540 329.634919 293.677107 261.635577 261.626550 293.681664 329.636790 329.630225 293.677600 293.689046 329.631732 329.639090 349.229227 392.001614 392.010019 349.252521 329.632836 293.673893 261.643927 293.701046 329.625913 329.638479 293.677933 261.630258 6.297173

Best theoretical training
329.640000 329.640000 349.240000 392.000000 392.000000 349.240000 329.640000 293.680000 261.640000 261.640000 293.680000 329.640000 329.640000 293.680000 293.680000 329.640000 329.640000 349.240000 392.000000 392.000000 349.240000 329.640000 293.680000 261.640000 293.680000 329.640000 329.640000 293.680000 261.640000 0.000000
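The two rows above can be summarized with a single number. As a sketch (max_abs_error is a hypothetical helper, not part of nn-os), the maximum per-note error in Hz:

```c
#include <math.h>

/* largest absolute difference between the NN output row and the target row */
double max_abs_error(const double *out, const double *want, int n)
{
    double worst = 0.0;
    for (int i = 0; i < n; i++) {
        double e = fabs(out[i] - want[i]);
        if (e > worst) worst = e;
    }
    return worst;
}
```

Comparing only the trained notes is fair here: the training loop fills just Units[0] - 1 targets, so the 30th printed value (6.297173 vs. 0.000000) is unconstrained by training.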

Now let's mock some data, as if the conductor were imperfect (of course):

/* TODO: this should come from the MIDI device by trials from actual conducting */
for (i = 0; i < 30 - 1; i = i + 2)
{
	OdeToJoyIMPROV[i] = 0.5;
	OdeToJoyIMPROV[i+1] = -0.5;
}

for (i = 0; i < Units[0] - 1; i++)
{
	input2[i] = OdeToJoyIMPROV[i];
}

for (i = 0; i < 30 - 3; i = i + 4)
{
	OdeToJoyIMPROV[i] = 0.5;
	OdeToJoyIMPROV[i+1] = -0.5;
	OdeToJoyIMPROV[i+2] = 0.0;
	OdeToJoyIMPROV[i+3] = -0.5;
}

for (i = 0; i < Units[0] - 1; i++)
{
	input3[i] = OdeToJoyIMPROV[i];
}

Now let's train on the new hand gestures, 300 times in some order; again, 300 is overkill:


printf ( "Let's train with the new gestures \n");

for (i = 0; i < 300; i++)
{
	nnSetInput(&net, input1);
	nnSimulateNet(&net, input1, output1, target1, TRUE);

	nnSetInput(&net, input2);
	nnSimulateNet(&net, input2, output2, target1, TRUE);

	nnSetInput(&net, input3);
	nnSimulateNet(&net, input3, output3, target1, TRUE);
}

And then NN-evaluate input2 and input3 as the new conducting hand gestures:


/* Now use the NN to evaluate only: set training to FALSE */

nnSetInput(&net, input2);
nnSimulateNet(&net, input2, output2, target1, FALSE);

printf ("IMPROV1 output \n");
printf("\n");
for (i = 0; i < Units[2]; i++)
{
	printf("%f ", b4*output2[i]);
}

printf("\n");
printf("\n");

nnSetInput(&net, input3);
nnSimulateNet(&net, input3, output3, target1, FALSE);

printf ("IMPROV2 output \n");
printf("\n");
for (i = 0; i < Units[2]; i++)
{
	printf("%f ", b4*output3[i]);
}

The new gestures, though very different from the perfect theoretical conducting, produce the same notes as the original Ode To Joy:


Let's train with the new gestures
IMPROV1 output

329.640739 329.637722 349.240769 391.999369 391.993992 349.238244 329.641546 293.680524 261.641142 261.641275 293.679594 329.640366 329.641585 293.679726 293.678846 329.641542 329.638861 349.241386 391.998919 391.998110 349.238400 329.640911 293.680796 261.639473 293.677889 329.642396 329.639851 293.680071 261.641150 1.333444

IMPROV2 output

329.639012 329.651783 349.237853 391.995413 392.024284 349.245752 329.630880 293.674910 261.631967 261.636824 293.688278 329.640345 329.631024 293.685027 293.678352 329.635509 329.644757 349.233828 392.006185 392.014813 349.246216 329.638882 293.677430 261.643380 293.687260 329.630201 329.644626 293.680531 261.639583 4.099300


Let's run the same experiment, but this time with no training:


/* untrained input */

for (i = 0; i < 30 - 3; i = i + 4)
{
	OdeToJoyIMPROV[i] = 0.5;
	OdeToJoyIMPROV[i+1] = 0.0;
	OdeToJoyIMPROV[i+2] = 0.0;
	OdeToJoyIMPROV[i+3] = -0.5;
}

for (i = 0; i < Units[0] - 1; i++)
{
	input4[i] = OdeToJoyIMPROV[i];
}

/* Now use the NN to evaluate only: set training to FALSE */

nnSetInput(&net, input4);
nnSimulateNet(&net, input4, output4, target1, FALSE);

printf ("\n\nUNTRAINED IMPROV3 output \n");
printf("\n");
for (i = 0; i < Units[2]; i++)
{
	printf("%f ", b4*output4[i]);
}

Untrained output:


247.001183 324.423909 314.092314 453.881976 389.269045 242.455085 422.168809 351.463674 303.364334 255.216437 333.914283 221.099017 323.603458 283.520151 220.679470 382.151972 304.372105 353.983007 434.623649 333.220943 398.207716 294.526316 244.059593 276.314868 192.237255 385.651120 316.734008 323.127553 300.306695 5.860819


The sound of the untrained output, which can serve as an improvisation (the brief noise at the beginning is due to the export from Mathematica and is not in the data):

OdeToJoyIMPROVuntrained

Let's train on this untrained data, but just a little, i.e. only a handful of repeated learning passes:


/* nnSetInput is not needed here, but it is included to show how the code works */
nnSetInput(&net, input4);
nnSimulateNet(&net, input4, output4, target1, TRUE);

for (i = 0; i < 5; i++)
	nnSimulateNet(&net, input4, output4, target1, TRUE);

printf ("\n\nTrain for IMPROV3 output \n");
printf("\n");
for (i = 0; i < Units[2]; i++)
	printf("%f ", b4*output4[i]);
		



Train for IMPROV3 output

335.832798 332.398557 351.158961 431.807949 393.989485 355.469383 335.469460 293.893473 261.059124 261.642879 291.563823 334.445296 330.962369 293.169041 296.538629 328.687413 331.194847 349.066182 406.753920 392.095134 350.793554 332.058776 296.847315 260.285059 297.137262 329.510238 331.725621 294.408333 261.199646 5.319829


The sound of the trained data; however, due to the small number of training repetitions, there are slight off-beats:

OdeToJoyIMPROV

If we run the training more than 4 times, the improvisation matches the original Ode To Joy!

Posted in C/C++, Documentation, Electronic Music, Software, tutorial

Petri Net: nnNET2

printf("I am nnNET2\n");
	
	nnDebug = TRUE;
	
	nnPetriMatrices(&net, 3, 4);
	
	/* map the transitions into progs */
	InitProgs();
	/* FIXME: these should be in a file  */
	net.PetriNet.progs[0] = nnBoot;
	net.PetriNet.progs[1] = nnPartitionI;
	net.PetriNet.progs[2] = nnTeach;
	
	
	/* FIXME: Petri Net should be read from a text file */
	/* TODO: Option should be provided for over the net Petri Net specification */
	
	/* initialize the petri net, only do 1s since we used calloc */
	net.PetriNet.I[0][0] = 1.0;
	net.PetriNet.I[1][1] = 1.0;
	net.PetriNet.I[1][2] = 1.0;
	net.PetriNet.I[2][1] = 1.0;
	net.PetriNet.I[2][3] = 1.0;
	
	net.PetriNet.O[0][1] = 1.0;
	net.PetriNet.O[1][1] = 1.0;
	net.PetriNet.O[2][1] = 1.0;

	
	nnPetriPostProc(&net);
	
	
	/* setup markings */
	array_zero(net.PetriNet.IMarks, net.PetriNet.place_num);
	net.PetriNet.IMarks[0] = 1.0; /* boot, all networking set up at boot */
	nnPetriFire(&net);
	print_array(net.PetriNet.IMarks, net.PetriNet.place_num);
	
	/* redundant firing, just for testing purposes */
	nnPetriFire(&net);
	
	
	/* setup markings */
	
	/* TODO: net.partitionI here... */
	net.PetriNet.IMarks[2] = 1.0; /* Partition */
	nnPetriFire(&net);
	print_array(net.PetriNet.IMarks, net.PetriNet.place_num);
		
	
	
/* setup markings */
for ( ; ; )
{
	net.PetriNet.IMarks[3] = 1.0; /* teach the remote NN! */
	nnPetriFire(&net);
}

Posted in C/C++, Documentation, Software

Struct nnPacket: Serialization of the Nets

From nnBackProp.h (this file will be broken up into more includes):

typedef struct {  /* all the numbers are in host format */

	INT opcode;
	INT len;      /* length of the array of REALs */
	REAL *array;

} nnPACKET;

All Nets in nn-os are serialized via the struct nnPACKET, i.e. any graph-like or matrix-like structures for the Neural Networks or Petri Nets are serialized into one long array of REALs, for two purposes:

  1. Persistence: Storage and retrieval
  2. Network IO: TCP sockets send() and recv() a large packet, which is a memcpy() of the nnPACKET serialized into an (unsigned char *) buffer
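As an illustration of the flattening idea, here is a self-contained sketch of packing an nnPACKET into a byte buffer for send()/recv(). The wire format used here (network-order 32-bit ints, with each REAL split by frexp() into a scaled mantissa and an exponent) is an assumption for the example, not nn-os's actual format from nnProto.c:

```c
#include <arpa/inet.h>
#include <math.h>
#include <stdint.h>
#include <string.h>

typedef int32_t INT;
typedef double  REAL;
typedef struct { INT opcode; INT len; REAL *array; } nnPACKET;

static unsigned char *put_u32(unsigned char *p, uint32_t v)
{
    v = htonl(v);               /* host to network byte order */
    memcpy(p, &v, 4);
    return p + 4;
}

static const unsigned char *get_u32(const unsigned char *p, uint32_t *v)
{
    memcpy(v, p, 4);
    *v = ntohl(*v);             /* network back to host byte order */
    return p + 4;
}

/* each REAL becomes two ints: a mantissa scaled to 31 bits, plus an exponent */
size_t nnPacketPack(const nnPACKET *pkt, unsigned char *buf)
{
    unsigned char *p = put_u32(buf, (uint32_t)pkt->opcode);
    p = put_u32(p, (uint32_t)pkt->len);
    for (INT i = 0; i < pkt->len; i++) {
        int e;
        double m = frexp(pkt->array[i], &e);          /* x = m * 2^e */
        p = put_u32(p, (uint32_t)(int32_t)(m * 2147483648.0));
        p = put_u32(p, (uint32_t)e);
    }
    return (size_t)(p - buf);
}

size_t nnPacketUnpack(const unsigned char *buf, nnPACKET *pkt)
{
    uint32_t v;
    const unsigned char *p = get_u32(buf, &v);
    pkt->opcode = (INT)v;
    p = get_u32(p, &v);
    pkt->len = (INT)v;
    for (INT i = 0; i < pkt->len; i++) {
        uint32_t mant, e;
        p = get_u32(p, &mant);
        p = get_u32(p, &e);
        pkt->array[i] = ldexp((int32_t)mant / 2147483648.0, (int32_t)e);
    }
    return (size_t)(p - buf);
}
```

A round trip through pack/unpack reproduces each REAL to roughly 31 bits of mantissa, independent of either machine's endianness or native floating-point layout.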

 

Note that the files nnProto.c and nnNetworkIO.c contain all the support for portable number formats: there is no need to worry about big-endian vs. little-endian integers or differing floating-point formats.

Posted in C/C++, Documentation, Software

Persistent Structs


Data structures in nn-os are made to be suitable for persistent applications, i.e. they can be stored away and retrieved, and all the interim computational structures and parameters are part of the struct definition.

With a slight hit on memory consumption, all the necessary buffers for interprocess communication are also stored in the same struct; for example, all the (char *) buffers needed for TCP socket communication are within the struct as well. This reduces the number of malloc() and memcpy() calls, which are expensive.
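For a flat, pointer-free struct, the persistence idea reduces to a single write and a single read. A minimal sketch (PARAMS and the helper names are hypothetical; nn-os's real NET struct holds pointers, which is why it goes through the nnPACKET serialization instead):

```c
#include <stdio.h>

/* a pointer-free slice of the training parameters */
typedef struct {
    double Alpha;  /* momentum factor */
    double Eta;    /* learning rate   */
    double Gain;   /* sigmoid gain    */
} PARAMS;

/* store and retrieve the whole struct in one call; returns 1 on success */
int params_save(const PARAMS *p, FILE *f) { return fwrite(p, sizeof *p, 1, f) == 1; }
int params_load(PARAMS *p, FILE *f)       { return fread(p, sizeof *p, 1, f) == 1; }
```

Note this raw-bytes form is only portable between machines with the same architecture and padding; cross-machine persistence is what the nnPACKET array format is for.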

From nnBackProp.h:

typedef struct {                 /* A NET:                                */

	char		  name[32];
	INT			  type;

	LAYER**       Layer;         /* - layers of this net                  */
	LAYER*        InputLayer;    /* - input layer                         */
	INT			  partitionI;    /* - fixed non-overlap					  */
	LAYER*        OutputLayer;   /* - output layer						  */
	ARRAY		  Target;		 /* this is supplied during training      */
	INT			  partitionO[2]; /* - fixed overlapping					  */

	INT			  listen;
	INT			  Axons[2];		 /* - input output layer sockets		  */
	uint32_t	  IPaddr[3];
	uint32_t	  Ports[3];
	unsigned char *buffer[3];	 /* for efficiency purpose only	          */
	INT			  buffsize[3];   /* third buffer for nnACK communications */

	/* these are the local vars that are placed here to make nnOS persistent */

	nnPACKET	  packet[2];
	BOOL		  partition;
	INT			  mode;
	INT			  opcode;
	PQ			  pq;
	REAL		  pqarray[dimPQ];

	REAL          Alpha;         /* - momentum factor                     */
	REAL          Eta;           /* - learning rate                       */
	REAL          Gain;          /* - gain of sigmoid function            */
	REAL          Error;         /* - total net error                     */

	nnPetriNet	  PetriNet;
} NET;

And again, the Petri Net struct:


typedef struct {

	INT		place_num; /* number of places, n */
	INT		trans_num; /* number of transitions, m */

	ARRAY	IRowAdd; /* m dim: Add along the rows of I */
	ARRAY	IMarkingsRowAds;  /* m dim: Add along the rows of IMarkings */
	ARRAY	IMarks; /* n dim: marking of the input array of places */
	ARRAY	IMarksNew; /* n dim: new marking of places after firing */
	ARRAY   trans_firing;    /* m dim: stores which transitions firing */
	ARRAY   tmp1;    /* m dim: stores the intermediate results */
	INT		*progs;  /* m dim: each prog assigned to a transition firing */
	INT		*progs_return; /* m dim: stores the return values of progs */

	MATRIX	I; /* 2D matrices transitions by places */
	MATRIX	O;
	MATRIX	OMinusI;
	MATRIX  IMarkings; /* current input markings vs. transitions */

} nnPetriNet;

Posted in C/C++, Documentation, Software

Petri Net: nnNET1 Execution

Once the Petri Net is configured, the next step is to start it with some set of deposited tokens; in our case, only one.

nnPetriFire() returns an integer which, if non-zero, indicates that some transition was fired, and print_array() prints the updated final place markings after the transition firings.

Look at how economical and simple the code is for a sophisticated multi-tasking engine:

/* setup markings */
array_zero(net.PetriNet.IMarks, net.PetriNet.place_num);
net.PetriNet.IMarks[0] = 1.0; /* boot, all networking set up at boot */

for (;;)
{
	if (nnPetriFire(&net))
		print_array(net.PetriNet.IMarks, net.PetriNet.place_num);
}
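For intuition, the matrix-style firing rule can be sketched in self-contained C (the names and the tiny two-transition net here are hypothetical, not nn-os's API): transition t is enabled when the current marking covers row t of I, and firing moves the marking by M' = M - I[t] + O[t].

```c
#include <string.h>

#define PLACES 3
#define TRANS  2

/* t0: place 0 -> place 1,  t1: place 1 -> place 2 */
static const double I[TRANS][PLACES] = { {1, 0, 0}, {0, 1, 0} };
static const double O[TRANS][PLACES] = { {0, 1, 0}, {0, 0, 1} };

/* fire every transition enabled under the starting marking; return count */
int petri_fire(double M[PLACES])
{
    double M0[PLACES];
    int fired = 0;
    memcpy(M0, M, sizeof M0);               /* snapshot: enablement is
                                               judged against the marking
                                               at the start of the step */
    for (int t = 0; t < TRANS; t++) {
        int enabled = 1;
        for (int p = 0; p < PLACES; p++)
            if (M0[p] < I[t][p]) { enabled = 0; break; }
        if (!enabled) continue;
        for (int p = 0; p < PLACES; p++)
            M[p] += O[t][p] - I[t][p];      /* M' = M - I[t] + O[t] */
        fired++;
    }
    return fired;
}
```

Starting from a single token in place 0, each call moves the token one place down the chain, and the function returns 0 once nothing is enabled, mirroring the nnPetriFire() return convention above.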

Posted in C/C++, Documentation, Software

Petri Net: nnNET1 Matrix Configuration in C

Below is the code for nnNET1.c. To understand the matrices and the math, see this link:

http://www.techfak.uni-bielefeld.de/~mchen/BioPNML/Intro/MRPN.html

There are two matrices, I and O (input and output), with the C expressions below used to configure the Petri Net. The programmer needs to specify only the 1s, since the use of calloc() for memory allocation guarantees zeroed memory:

net.PetriNet.I[0][0] = 1.0;

or

net.PetriNet.O[0][1] = 1.0;


printf("I am nnNET1\n");
nnDebug = TRUE;

/* The Petri Net has to be configured before it can be executed; therefore
these lines of code are in main() to configure the Petri Net, and cannot
be placed in the progXXX functions. */

nnPetriMatrices(&net, 8, 11);

printf("matrices allocated \n");

/* map the transitions into progs */
InitProgs();

/* FIXME: these should be in a file */
net.PetriNet.progs[0] = nnBoot2;
net.PetriNet.progs[1] = nnListen;
net.PetriNet.progs[2] = nnGenesis;
net.PetriNet.progs[3] = nnPartitionI2;
net.PetriNet.progs[4] = nnTraining;
net.PetriNet.progs[5] = nnNil;
net.PetriNet.progs[6] = nnAxonTOInput;
net.PetriNet.progs[7] = nnNil;

/* FIXME: Petri Net should be read from a text file */
/* TODO: Option should be provided for over the net Petri Net specification */

/* initialize the petri net, only do 1s since we used calloc */
net.PetriNet.I[0][0] = 1.0;
net.PetriNet.I[1][1] = 1.0;
net.PetriNet.I[2][2] = 1.0;
net.PetriNet.I[2][3] = 1.0;
net.PetriNet.I[2][4] = 1.0;
net.PetriNet.I[3][2] = 1.0;
net.PetriNet.I[3][4] = 1.0;
net.PetriNet.I[3][5] = 1.0;
net.PetriNet.I[4][2] = 1.0;
net.PetriNet.I[4][6] = 1.0;
net.PetriNet.I[5][2] = 1.0;
net.PetriNet.I[5][7] = 1.0;
net.PetriNet.I[6][8] = 1.0;
net.PetriNet.I[7][2] = 1.0;
net.PetriNet.I[7][9] = 1.0;

net.PetriNet.O[0][1] = 1.0;
net.PetriNet.O[1][2] = 1.0;
net.PetriNet.O[1][3] = 1.0;
net.PetriNet.O[1][4] = 1.0;
net.PetriNet.O[2][4] = 1.0;
net.PetriNet.O[2][8] = 1.0;
net.PetriNet.O[3][8] = 1.0;
net.PetriNet.O[4][8] = 1.0;
net.PetriNet.O[5][8] = 1.0;
net.PetriNet.O[6][2] = 1.0;
net.PetriNet.O[7][10] = 1.0;

nnPetriPostProc(&net);

Posted in C/C++, Documentation, Software

Petri Net: nnNET1

In the example test codes there is an nnNET2.c file, which contains the C program that runs the NN on the BeagleBone.

The idea was to have 3 separate NNs running on separate machines and communicating with each other.

nnNET2's input is mapped to the output of nnNET1, and nnNET2's output is mapped to the input of nnNET3.

First, a large switch() was written which contained all the states of nnNET2 communicating with nnNET1 and nnNET3. It soon became apparent that such an approach is inefficient and that no serious control could be added to the NN.

What was needed was multi-tasking software that also allows for asynchronous communication. Petri Nets are the theoretical apparatus for such applications in general.

A restricted, simpler version of Petri Nets allows a seamless linear-algebra formulation of its firing. Since the BeagleBone has a vector floating-point silicon engine, the choice was obvious: the firings are modelled by matrix operations.

The Petri Net below starts with a token at place 1 and immediately fires its transition, which executes the function progBoot2(). The developer places whatever code is needed into progBoot2(), or writes his or her own progBootMINE().

Immediately afterwards, place 2 fires its transition and listens on the configured sockets for network communications. Places 4 and 5 are then deposited with a token.

Transition progGenesis() is fired/executed, and tokens are placed in places 5 and 9. The transition for place 9 fires and executes progAxon2Input(), which reads the network socket for incoming data.

Once the network socket is read, one of the places 6-8 receives a token, and the corresponding transition fires, i.e. the execution of its progXXX() occurs.

Posted in Documentation

Beaglebone: Introduction

http://beagleboard.org/bone

What is the BeagleBone capable of doing?
At over 1.5 billion Dhrystone operations per second, with vector floating-point arithmetic, the BeagleBone is capable of not just interfacing to all of your robotics motor drivers, location or pressure sensors and 2D or 3D cameras, but also of running OpenCV, OpenNI and other image collection and analysis software to recognize the objects around your robot and the gestures you might make to control it. Through HDMI, VGA or LCD expansion boards, it is capable of decoding and displaying multiple video formats utilizing a completely open-source software stack and synchronizing playback over Ethernet or USB with other BeagleBoards to create massive video walls. If what you are into is building 3D printers, then the BeagleBone's extensive PWM capabilities, on-chip Ethernet, and 3D rendering and manipulation capabilities all help you eliminate both your underpowered microcontroller-based controller board and that PC from your basement.

What are the detailed hardware specifications?
Keep coming back! These will be updated soon. Some additional details are in the latest BeagleBoard.org flyer.

Board size: 3.4" x 2.1"

Shipped with 4GB microSD card with the Angstrom Distribution with node.js and Cloud9 IDE

Single cable development environment with built-in FTDI-based serial/JTAG and on-board hub to give the same cable simultaneous access to a USB device port on the target processor

Industry standard 3.3V I/Os on the expansion headers with easy-to-use 0.1" spacing

On-chip Ethernet, not off of USB

Easier to clone thanks to larger pitch on BGA devices (0.8mm vs. 0.4mm), no package-on-package memories, standard DDR2 vs. LPDDR, integrated USB PHYs and more.

Posted in Beaglebone, Hardware