64-bit Support: Multiplication to 128bit

General FreeBASIC programming questions.
albert
Posts: 5676
Joined: Sep 28, 2006 2:41
Location: California, USA

Re: 64-bit Support: Multiplication to 128bit

Postby albert » Dec 27, 2014 18:30

I pointed it to the 64-bit compiler and took out -asm intel, and it compiled and ran okay..
stephanbrunker
Posts: 62
Joined: Nov 02, 2013 14:57

Re: 64-bit Support: Multiplication to 128bit

Postby stephanbrunker » Mar 10, 2015 0:48

Sorry for the late reply ... working on other topics ...

@MichaelW: Thank you for the 64 x 64 -> 128-bit asm. I'll try it out - hopefully I can speed up the RFC 4418 UMAC (Carter-Wegman MAC) by some orders of magnitude on x64 systems vs. the x86 version, because it contains exactly these operations. If / when I get the time for it ...

@dkl: Is it possible to wrap these native operations into a 128-bit datatype, the same way the 64-bit type was implemented for the x86 compiler? I assume that on an x86 platform, ulong x ulong = ulongint is done in a single imul operation? Cross-compiling wouldn't be possible then, but you have to write 32-bit and 64-bit versions of your programs anyway to optimize for speed. Most hash functions have two versions anyway, with DWORD and QWORD matrix sizes.

@albert: Off-topic, but ... I just did a course on Coursera on cryptography, and the first rule is: don't design your own cryptosystems (the second one: don't implement them yourself ...). It's much too easy to overlook something, and then there is very likely not only a 2^128 attack on your system (if the key length is 128 bits), but a much simpler one - just think of WPS (which is badly broken).

If you scramble the message bits into "random" ones, it's a little bit like steganography. But the problem is the generation of your random bits. Most cryptographic functions are pseudorandom functions. For example, the best known attack (2007) against Salsa20 with 8 rounds is 2^249 (256-bit key length), so the full 20 rounds seem secure ... unless there exists an unpublished attack found by the intelligence services - I don't know. So if your "random" bits aren't really random, your message stands out in front of them like a spray painting on a woodchip wallpaper, so to speak.

The most likely way to crack an encryption is not a full-on brute-force attack (most unlikely), but using backdoors, timing attacks, infiltrating the systems of sender or receiver to get the plaintext or key, man-in-the-middle attacks and so on ... I only implement some crypto because it's a fascinating topic, but I'm sure I made some mistakes, so the results aren't secure even if the functions themselves are.
albert
Posts: 5676
Joined: Sep 28, 2006 2:41
Location: California, USA

Re: 64-bit Support: Multiplication to 128bit

Postby albert » Mar 10, 2015 2:34

@stephanbrunker

My cypher breaks a message into bit lengths of 256, 512 or 1024; it then generates 4x as many garbage bits.
It then scrambles the message bits into the garbage bits..

Say you're using the 256-bit cypher:
Your message is broken into lengths of 256 bits and padded if the last block isn't exactly 256 bits.
It then generates 1024 bits of random garbage, then scrambles the message bits into the garbage bits.
The user can enter the scramble order or click generate. The output is converted from nibbles to randomly generated uppercase alpha characters.

Here they are in FNXBasic , http://fnxbasic.com/cgi-bin/QandA/YaBB. ... 1246910618
I ran them all several hundred times and saved the keys, and every one had unique keys, whereas if you use my FreeBASIC cypher Vari_Cyph it repeats the same keys every time you click generate and every time you run the program.
So you have to put in a modified algorithm to get true randomness.

And you can cypher multiple times to create a "3 lock box" where it takes 3 keys in a certain order to decypher it..
albert
Posts: 5676
Joined: Sep 28, 2006 2:41
Location: California, USA

Re: 64-bit Support: Multiplication to 128bit

Postby albert » Mar 11, 2015 15:34

My cyphers work like this:

Let's say your message is "A"

1000001

we generate 4 times as many garbage bits:

1010000111111010101010101101010101111001


then we generate a key telling us where in the garbage the message bits are

08 10 01 32 18 30 17 24

so the first message bit goes to location 08

1010000111111010101010101101010101111001 the spot is already a 1 so nothing changes.

the next message bit goes to location 10 , location 10 is 1 , so it becomes a 0

1010000111011010101010101101010101111001

etc....


At the end we take all the nibbles of data and map each one to one of 16 characters, a sub key:
0000 = Q
0001 = F
0010 = L
0011 = G
etc...

To decypher you need to get the sub key in the proper order and convert it back to nibbles, then figure out where and in what order the message bits are.
