Programming masters on #hovnokod


Anonymous,

because counting the array pays off

$announcementData = array();
$announcementData["totalRecords"] = count($announcementData);
$announcementData[] = $this->repository->getAllAnnouncements();
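For context, the bug is that `count()` runs on the still-empty array, and the repository result is then appended as a single nested element rather than stored under a key. A sketch of the intended logic (in Python rather than the original PHP; `get_all_announcements()` is a hypothetical stand-in for the repository call):

```python
def get_all_announcements():
    # Hypothetical data source standing in for $this->repository.
    return ["maintenance window", "new release", "office closed"]

def build_announcement_data():
    # Fetch first, count afterwards - the opposite order of the exhibit.
    records = get_all_announcements()
    return {
        "records": records,
        "totalRecords": len(records),  # counts the filled list, not an empty one
    }
```

With the order reversed, `totalRecords` actually reflects the number of announcements.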

Sebastian Simko,

Training a neural network using backpropagation.

public NeuralNetwork backpropError(double targetResult) {
        neurons[7].setError((targetResult - neurons[7].getOutput()) * neurons[7].derivative());
        neurons[7].setBias(neurons[7].getBias() + LEARNING_RATE * neurons[7].getError());
        neurons[7].getWeights()[0] = neurons[7].getWeights()[0] + LEARNING_RATE * neurons[7].getError() * neurons[1].getOutput();
        neurons[7].getWeights()[1] = neurons[7].getWeights()[1] + LEARNING_RATE * neurons[7].getError() * neurons[2].getOutput();
        neurons[7].getWeights()[2] = neurons[7].getWeights()[2] + LEARNING_RATE * neurons[7].getError() * neurons[3].getOutput();
        neurons[7].getWeights()[3] = neurons[7].getWeights()[3] + LEARNING_RATE * neurons[7].getError() * neurons[4].getOutput();
        neurons[7].getWeights()[4] = neurons[7].getWeights()[4] + LEARNING_RATE * neurons[7].getError() * neurons[5].getOutput();
        neurons[7].getWeights()[5] = neurons[7].getWeights()[5] + LEARNING_RATE * neurons[7].getError() * neurons[6].getOutput();
        neurons[6].setError((neurons[7].getWeights()[5] * neurons[7].getError()) * neurons[6].derivative());
        neurons[6].setBias(neurons[6].getBias() + LEARNING_RATE * neurons[6].getError());
        neurons[6].getWeights()[0] = neurons[6].getWeights()[0] + LEARNING_RATE * neurons[6].getError() * neurons[0].getOutput();
        neurons[5].setError((neurons[7].getWeights()[4] * neurons[7].getError()) * neurons[5].derivative());
        neurons[5].setBias(neurons[5].getBias() + LEARNING_RATE * neurons[5].getError());
        neurons[5].getWeights()[0] = neurons[5].getWeights()[0] + LEARNING_RATE * neurons[5].getError() * neurons[0].getOutput();
        neurons[4].setError((neurons[7].getWeights()[3] * neurons[7].getError()) * neurons[4].derivative());
        neurons[4].setBias(neurons[4].getBias() + LEARNING_RATE * neurons[4].getError());
        neurons[4].getWeights()[0] = neurons[4].getWeights()[0] + LEARNING_RATE * neurons[4].getError() * neurons[0].getOutput();
        neurons[3].setError((neurons[7].getWeights()[2] * neurons[7].getError()) * neurons[3].derivative());
        neurons[3].setBias(neurons[3].getBias() + LEARNING_RATE * neurons[3].getError());
        neurons[3].getWeights()[0] = neurons[3].getWeights()[0] + LEARNING_RATE * neurons[3].getError() * neurons[0].getOutput();
        neurons[2].setError((neurons[7].getWeights()[1] * neurons[7].getError()) * neurons[2].derivative());
        neurons[2].setBias(neurons[2].getBias() + LEARNING_RATE * neurons[2].getError());
        neurons[2].getWeights()[0] = neurons[2].getWeights()[0] + LEARNING_RATE * neurons[2].getError() * neurons[0].getOutput();
        neurons[1].setError((neurons[7].getWeights()[0] * neurons[7].getError()) * neurons[1].derivative());
        neurons[1].setBias(neurons[1].getBias() + LEARNING_RATE * neurons[1].getError());
        neurons[1].getWeights()[0] = neurons[1].getWeights()[0] + LEARNING_RATE * neurons[1].getError() * neurons[0].getOutput();

        return this;

    }
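For the curious: the hand-unrolled updates above are a single gradient step for a 1-6-1 network written out element by element (input neuron 0, hidden neurons 1..6, output neuron 7). The same computation collapses into two loops; a minimal Python sketch, with a `Neuron` class assumed from the snippet's getters and setters and the same update order as the original (hidden errors use the already-updated output weights):

```python
LEARNING_RATE = 0.1

class Neuron:
    """Minimal stand-in assumed from the snippet's getOutput/setError/etc."""
    def __init__(self, n_weights):
        self.weights = [0.5] * n_weights
        self.bias = 0.0
        self.output = 0.5   # cached activation from the forward pass
        self.error = 0.0

    def derivative(self):
        # Assumed sigmoid derivative expressed via the cached output.
        return self.output * (1.0 - self.output)

def backprop_error(neurons, target):
    # Output neuron (index 7) reads from hidden neurons 1..6.
    out = neurons[7]
    out.error = (target - out.output) * out.derivative()
    out.bias += LEARNING_RATE * out.error
    for i in range(6):
        out.weights[i] += LEARNING_RATE * out.error * neurons[i + 1].output

    # Hidden neurons 1..6 each read from the single input neuron 0.
    for i in range(1, 7):
        h = neurons[i]
        h.error = out.weights[i - 1] * out.error * h.derivative()
        h.bias += LEARNING_RATE * h.error
        h.weights[0] += LEARNING_RATE * h.error * neurons[0].output
```

Twenty-seven near-identical lines become two loops, and changing the layer width no longer means copy-pasting another block.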

Anonymous,

Because for loops are too mainstream.

public function generateHash()
{
    $end = false;
    $j = 0;
    $generator = new TokenGenerator();
    do {
        $hash = $generator->generate(16);
        if ($this->preregistrationRepository->findByHash($hash) == null) {
            $end = true;
        } else {
            $j++;
        }

        // break to avoid looping forever
        if ($j > 20) {
            throw new \Exception("Failed to create preregistration - could not generate a unique hash.");
        }
    } while (!$end);
    return $hash;
}
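The `$end` flag and the separate `$j` counter are exactly what a bounded for loop expresses directly. A sketch in Python rather than the original PHP, with `existing_hashes` as a hypothetical stand-in for `preregistrationRepository->findByHash()`:

```python
import secrets

MAX_ATTEMPTS = 20

def generate_hash(existing_hashes):
    # existing_hashes stands in for the repository lookup in the exhibit.
    for _ in range(MAX_ATTEMPTS):
        candidate = secrets.token_hex(8)  # 16 hex characters
        if candidate not in existing_hashes:
            return candidate
    raise RuntimeError("Could not generate a unique hash.")
```

The loop bound doubles as the retry cap, so the flag, the counter, and the mid-loop throw all disappear.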

Třešťák Huho,

No reason for built-in functions to take the same parameters in the same order, right? :)

in_array( $needle, $haystack );
strpos( $haystack, $needle );

Marek Dočekal,